00:00:00.001 Started by upstream project "autotest-per-patch" build number 130561 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.067 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:07.383 The recommended git tool is: git 00:00:07.383 using credential 00000000-0000-0000-0000-000000000002 00:00:07.385 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:07.395 Fetching changes from the remote Git repository 00:00:07.398 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:07.408 Using shallow fetch with depth 1 00:00:07.408 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:07.408 > git --version # timeout=10 00:00:07.419 > git --version # 'git version 2.39.2' 00:00:07.419 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:07.431 Setting http proxy: proxy-dmz.intel.com:911 00:00:07.431 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:46.141 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:46.156 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:46.168 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:46.168 > git config core.sparsecheckout # timeout=10 00:00:46.180 > git read-tree -mu HEAD # timeout=10 00:00:46.196 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:46.215 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:46.215 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:46.304 [Pipeline] Start of Pipeline 00:00:46.319 [Pipeline] library 00:00:46.321 Loading library shm_lib@master 00:00:46.321 Library shm_lib@master is cached. Copying from home. 00:00:46.336 [Pipeline] node 00:01:01.339 Still waiting to schedule task 00:01:01.339 Waiting for next available executor on ‘vagrant-vm-host’ 00:02:12.262 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:12.264 [Pipeline] { 00:02:12.278 [Pipeline] catchError 00:02:12.280 [Pipeline] { 00:02:12.298 [Pipeline] wrap 00:02:12.309 [Pipeline] { 00:02:12.318 [Pipeline] stage 00:02:12.321 [Pipeline] { (Prologue) 00:02:12.346 [Pipeline] echo 00:02:12.348 Node: VM-host-SM16 00:02:12.355 [Pipeline] cleanWs 00:02:12.364 [WS-CLEANUP] Deleting project workspace... 00:02:12.364 [WS-CLEANUP] Deferred wipeout is used... 
00:02:12.370 [WS-CLEANUP] done 00:02:12.577 [Pipeline] setCustomBuildProperty 00:02:12.683 [Pipeline] httpRequest 00:02:13.122 [Pipeline] echo 00:02:13.124 Sorcerer 10.211.164.101 is alive 00:02:13.134 [Pipeline] retry 00:02:13.136 [Pipeline] { 00:02:13.148 [Pipeline] httpRequest 00:02:13.153 HttpMethod: GET 00:02:13.153 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:02:13.153 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:02:13.164 Response Code: HTTP/1.1 200 OK 00:02:13.165 Success: Status code 200 is in the accepted range: 200,404 00:02:13.165 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:02:15.526 [Pipeline] } 00:02:15.543 [Pipeline] // retry 00:02:15.550 [Pipeline] sh 00:02:15.845 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:02:15.862 [Pipeline] httpRequest 00:02:16.307 [Pipeline] echo 00:02:16.309 Sorcerer 10.211.164.101 is alive 00:02:16.319 [Pipeline] retry 00:02:16.321 [Pipeline] { 00:02:16.335 [Pipeline] httpRequest 00:02:16.339 HttpMethod: GET 00:02:16.340 URL: http://10.211.164.101/packages/spdk_3a41ae5b34e38019cce706608dfbf6d94ba99d76.tar.gz 00:02:16.340 Sending request to url: http://10.211.164.101/packages/spdk_3a41ae5b34e38019cce706608dfbf6d94ba99d76.tar.gz 00:02:16.345 Response Code: HTTP/1.1 200 OK 00:02:16.345 Success: Status code 200 is in the accepted range: 200,404 00:02:16.346 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_3a41ae5b34e38019cce706608dfbf6d94ba99d76.tar.gz 00:02:40.062 [Pipeline] } 00:02:40.079 [Pipeline] // retry 00:02:40.086 [Pipeline] sh 00:02:40.362 + tar --no-same-owner -xf spdk_3a41ae5b34e38019cce706608dfbf6d94ba99d76.tar.gz 00:02:43.651 [Pipeline] sh 00:02:43.968 + git -C spdk log --oneline -n5 00:02:43.968 3a41ae5b3 bdev/nvme: controller failover/multipath doc change 00:02:43.968 7b38c9ede bdev/nvme: changed default config to multipath 00:02:43.968 fefe29c8c bdev/nvme: ctrl config consistency check 00:02:43.968 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:02:43.968 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:02:43.989 [Pipeline] writeFile 00:02:44.007 [Pipeline] sh 00:02:44.288 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:44.300 [Pipeline] sh 00:02:44.581 + cat autorun-spdk.conf 00:02:44.581 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:44.581 SPDK_TEST_NVMF=1 00:02:44.581 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:44.581 SPDK_TEST_URING=1 00:02:44.581 SPDK_TEST_USDT=1 00:02:44.581 SPDK_RUN_UBSAN=1 00:02:44.581 NET_TYPE=virt 00:02:44.581 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:44.588 RUN_NIGHTLY=0 00:02:44.590 [Pipeline] } 00:02:44.605 [Pipeline] // stage 00:02:44.620 [Pipeline] stage 00:02:44.622 [Pipeline] { (Run VM) 00:02:44.637 [Pipeline] sh 00:02:44.918 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:44.918 + echo 'Start stage prepare_nvme.sh' 00:02:44.918 Start stage prepare_nvme.sh 00:02:44.918 + [[ -n 6 ]] 00:02:44.918 + disk_prefix=ex6 00:02:44.918 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:02:44.918 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:02:44.918 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:02:44.918 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:44.918 ++ SPDK_TEST_NVMF=1 00:02:44.918 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:02:44.918 ++ SPDK_TEST_URING=1 00:02:44.918 ++ SPDK_TEST_USDT=1 00:02:44.918 ++ SPDK_RUN_UBSAN=1 00:02:44.918 ++ NET_TYPE=virt 00:02:44.918 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:44.918 ++ RUN_NIGHTLY=0 00:02:44.918 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:44.918 + nvme_files=() 00:02:44.918 + declare -A nvme_files 00:02:44.918 + backend_dir=/var/lib/libvirt/images/backends 00:02:44.918 + nvme_files['nvme.img']=5G 00:02:44.918 + nvme_files['nvme-cmb.img']=5G 00:02:44.918 + nvme_files['nvme-multi0.img']=4G 00:02:44.918 + nvme_files['nvme-multi1.img']=4G 00:02:44.918 + nvme_files['nvme-multi2.img']=4G 00:02:44.918 + nvme_files['nvme-openstack.img']=8G 00:02:44.918 + nvme_files['nvme-zns.img']=5G 00:02:44.918 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:44.918 + (( SPDK_TEST_FTL == 1 )) 00:02:44.918 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:44.918 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:44.918 + for nvme in "${!nvme_files[@]}" 00:02:44.918 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:02:44.918 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:44.918 + for nvme in "${!nvme_files[@]}" 00:02:44.918 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:02:44.918 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:44.918 + for nvme in "${!nvme_files[@]}" 00:02:44.918 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:02:44.918 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:44.918 + for nvme in "${!nvme_files[@]}" 00:02:44.918 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:02:44.919 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:44.919 + for nvme in "${!nvme_files[@]}" 00:02:44.919 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:02:44.919 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:44.919 + for nvme in "${!nvme_files[@]}" 00:02:44.919 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:02:44.919 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:44.919 + for nvme in "${!nvme_files[@]}" 00:02:44.919 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:02:44.919 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:44.919 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:02:44.919 + echo 'End stage prepare_nvme.sh' 00:02:44.919 End stage prepare_nvme.sh 00:02:44.931 [Pipeline] sh 00:02:45.215 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:45.215 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b 
/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:02:45.215 00:02:45.215 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:45.215 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:45.215 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:45.215 HELP=0 00:02:45.215 DRY_RUN=0 00:02:45.215 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:02:45.215 NVME_DISKS_TYPE=nvme,nvme, 00:02:45.215 NVME_AUTO_CREATE=0 00:02:45.215 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:02:45.215 NVME_CMB=,, 00:02:45.215 NVME_PMR=,, 00:02:45.215 NVME_ZNS=,, 00:02:45.216 NVME_MS=,, 00:02:45.216 NVME_FDP=,, 00:02:45.216 SPDK_VAGRANT_DISTRO=fedora39 00:02:45.216 SPDK_VAGRANT_VMCPU=10 00:02:45.216 SPDK_VAGRANT_VMRAM=12288 00:02:45.216 SPDK_VAGRANT_PROVIDER=libvirt 00:02:45.216 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:45.216 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:45.216 SPDK_OPENSTACK_NETWORK=0 00:02:45.216 VAGRANT_PACKAGE_BOX=0 00:02:45.216 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:45.216 FORCE_DISTRO=true 00:02:45.216 VAGRANT_BOX_VERSION= 00:02:45.216 EXTRA_VAGRANTFILES= 00:02:45.216 NIC_MODEL=e1000 00:02:45.216 00:02:45.216 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:02:45.216 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:48.521 Bringing machine 'default' up with 'libvirt' provider... 00:02:49.456 ==> default: Creating image (snapshot of base box volume). 00:02:49.456 ==> default: Creating domain with the following settings... 
00:02:49.456 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727789819_a63632e9011bce6435c9 00:02:49.456 ==> default: -- Domain type: kvm 00:02:49.456 ==> default: -- Cpus: 10 00:02:49.456 ==> default: -- Feature: acpi 00:02:49.456 ==> default: -- Feature: apic 00:02:49.456 ==> default: -- Feature: pae 00:02:49.456 ==> default: -- Memory: 12288M 00:02:49.456 ==> default: -- Memory Backing: hugepages: 00:02:49.456 ==> default: -- Management MAC: 00:02:49.456 ==> default: -- Loader: 00:02:49.456 ==> default: -- Nvram: 00:02:49.456 ==> default: -- Base box: spdk/fedora39 00:02:49.456 ==> default: -- Storage pool: default 00:02:49.456 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727789819_a63632e9011bce6435c9.img (20G) 00:02:49.456 ==> default: -- Volume Cache: default 00:02:49.456 ==> default: -- Kernel: 00:02:49.456 ==> default: -- Initrd: 00:02:49.456 ==> default: -- Graphics Type: vnc 00:02:49.456 ==> default: -- Graphics Port: -1 00:02:49.456 ==> default: -- Graphics IP: 127.0.0.1 00:02:49.456 ==> default: -- Graphics Password: Not defined 00:02:49.456 ==> default: -- Video Type: cirrus 00:02:49.456 ==> default: -- Video VRAM: 9216 00:02:49.456 ==> default: -- Sound Type: 00:02:49.456 ==> default: -- Keymap: en-us 00:02:49.456 ==> default: -- TPM Path: 00:02:49.456 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:49.456 ==> default: -- Command line args: 00:02:49.456 ==> default: -> value=-device, 00:02:49.456 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:49.456 ==> default: -> value=-drive, 00:02:49.456 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:02:49.456 ==> default: -> value=-device, 00:02:49.456 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:49.456 ==> default: -> value=-device, 00:02:49.456 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:49.456 ==> default: -> value=-drive, 00:02:49.456 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:49.456 ==> default: -> value=-device, 00:02:49.456 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:49.456 ==> default: -> value=-drive, 00:02:49.456 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:49.456 ==> default: -> value=-device, 00:02:49.456 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:49.456 ==> default: -> value=-drive, 00:02:49.456 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:49.456 ==> default: -> value=-device, 00:02:49.456 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:49.715 ==> default: Creating shared folders metadata... 00:02:49.715 ==> default: Starting domain. 00:02:51.611 ==> default: Waiting for domain to get an IP address... 00:03:06.488 ==> default: Waiting for SSH to become available... 00:03:07.857 ==> default: Configuring and enabling network interfaces... 
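Note on the command line args listed above: the -device/-drive pairs define two emulated NVMe controllers for the guest, nvme-0 with a single namespace backed by ex6-nvme.img and nvme-1 with three namespaces backed by the ex6-nvme-multi{0,1,2}.img files created in prepare_nvme.sh. As a rough sketch only (the real command is assembled by the vagrant-libvirt provider; the invocation below reproduces just the NVMe topology, with paths and IDs copied from the log, and omits the machine, memory and boot options a full run would need):

  qemu-system-x86_64 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3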
00:03:13.121 default: SSH address: 192.168.121.212:22 00:03:13.121 default: SSH username: vagrant 00:03:13.121 default: SSH auth method: private key 00:03:15.022 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:23.134 ==> default: Mounting SSHFS shared folder... 00:03:24.069 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:24.069 ==> default: Checking Mount.. 00:03:25.461 ==> default: Folder Successfully Mounted! 00:03:25.461 ==> default: Running provisioner: file... 00:03:26.027 default: ~/.gitconfig => .gitconfig 00:03:26.285 00:03:26.285 SUCCESS! 00:03:26.285 00:03:26.285 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:03:26.285 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:26.285 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:03:26.285 00:03:26.294 [Pipeline] } 00:03:26.310 [Pipeline] // stage 00:03:26.320 [Pipeline] dir 00:03:26.321 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:03:26.322 [Pipeline] { 00:03:26.336 [Pipeline] catchError 00:03:26.338 [Pipeline] { 00:03:26.351 [Pipeline] sh 00:03:26.629 + vagrant ssh-config --host vagrant 00:03:26.629 + sed -ne /^Host/,$p 00:03:26.629 + tee ssh_conf 00:03:30.813 Host vagrant 00:03:30.813 HostName 192.168.121.212 00:03:30.813 User vagrant 00:03:30.813 Port 22 00:03:30.813 UserKnownHostsFile /dev/null 00:03:30.813 StrictHostKeyChecking no 00:03:30.813 PasswordAuthentication no 00:03:30.813 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:30.813 IdentitiesOnly yes 00:03:30.813 LogLevel FATAL 00:03:30.813 ForwardAgent yes 00:03:30.813 ForwardX11 yes 00:03:30.813 00:03:30.827 [Pipeline] withEnv 00:03:30.829 [Pipeline] { 00:03:30.844 [Pipeline] sh 00:03:31.124 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:31.124 source /etc/os-release 00:03:31.124 [[ -e /image.version ]] && img=$(< /image.version) 00:03:31.124 # Minimal, systemd-like check. 00:03:31.124 if [[ -e /.dockerenv ]]; then 00:03:31.124 # Clear garbage from the node's name: 00:03:31.124 # agt-er_autotest_547-896 -> autotest_547-896 00:03:31.124 # $HOSTNAME is the actual container id 00:03:31.124 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:31.124 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:31.124 # We can assume this is a mount from a host where container is running, 00:03:31.124 # so fetch its hostname to easily identify the target swarm worker. 
00:03:31.124 container="$(< /etc/hostname) ($agent)" 00:03:31.124 else 00:03:31.124 # Fallback 00:03:31.124 container=$agent 00:03:31.124 fi 00:03:31.124 fi 00:03:31.124 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:31.124 00:03:31.393 [Pipeline] } 00:03:31.410 [Pipeline] // withEnv 00:03:31.419 [Pipeline] setCustomBuildProperty 00:03:31.434 [Pipeline] stage 00:03:31.436 [Pipeline] { (Tests) 00:03:31.455 [Pipeline] sh 00:03:31.738 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:32.010 [Pipeline] sh 00:03:32.289 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:32.302 [Pipeline] timeout 00:03:32.303 Timeout set to expire in 1 hr 0 min 00:03:32.304 [Pipeline] { 00:03:32.318 [Pipeline] sh 00:03:32.652 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:33.219 HEAD is now at 3a41ae5b3 bdev/nvme: controller failover/multipath doc change 00:03:33.230 [Pipeline] sh 00:03:33.508 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:33.778 [Pipeline] sh 00:03:34.055 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:34.325 [Pipeline] sh 00:03:34.603 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:03:34.880 ++ readlink -f spdk_repo 00:03:34.880 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:34.880 + [[ -n /home/vagrant/spdk_repo ]] 00:03:34.880 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:34.880 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:34.880 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:34.880 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:34.880 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:34.880 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:03:34.880 + cd /home/vagrant/spdk_repo 00:03:34.880 + source /etc/os-release 00:03:34.880 ++ NAME='Fedora Linux' 00:03:34.880 ++ VERSION='39 (Cloud Edition)' 00:03:34.880 ++ ID=fedora 00:03:34.880 ++ VERSION_ID=39 00:03:34.880 ++ VERSION_CODENAME= 00:03:34.880 ++ PLATFORM_ID=platform:f39 00:03:34.880 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:34.880 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:34.880 ++ LOGO=fedora-logo-icon 00:03:34.880 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:34.880 ++ HOME_URL=https://fedoraproject.org/ 00:03:34.880 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:34.880 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:34.880 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:34.880 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:34.880 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:34.880 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:34.880 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:34.880 ++ SUPPORT_END=2024-11-12 00:03:34.880 ++ VARIANT='Cloud Edition' 00:03:34.880 ++ VARIANT_ID=cloud 00:03:34.880 + uname -a 00:03:34.880 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:34.880 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:35.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:35.137 Hugepages 00:03:35.137 node hugesize free / total 00:03:35.137 node0 1048576kB 0 / 0 00:03:35.137 node0 2048kB 0 / 0 00:03:35.137 00:03:35.137 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:35.395 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:35.395 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:35.395 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:35.395 + rm -f /tmp/spdk-ld-path 00:03:35.395 + source autorun-spdk.conf 00:03:35.395 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:35.395 ++ SPDK_TEST_NVMF=1 00:03:35.395 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:35.395 ++ SPDK_TEST_URING=1 00:03:35.395 ++ SPDK_TEST_USDT=1 00:03:35.395 ++ SPDK_RUN_UBSAN=1 00:03:35.395 ++ NET_TYPE=virt 00:03:35.395 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:35.395 ++ RUN_NIGHTLY=0 00:03:35.395 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:35.395 + [[ -n '' ]] 00:03:35.395 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:35.395 + for M in /var/spdk/build-*-manifest.txt 00:03:35.395 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:35.395 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:35.395 + for M in /var/spdk/build-*-manifest.txt 00:03:35.395 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:35.395 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:35.395 + for M in /var/spdk/build-*-manifest.txt 00:03:35.395 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:35.395 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:35.395 ++ uname 00:03:35.395 + [[ Linux == \L\i\n\u\x ]] 00:03:35.395 + sudo dmesg -T 00:03:35.395 + sudo dmesg --clear 00:03:35.395 + dmesg_pid=5375 00:03:35.395 + [[ Fedora Linux == FreeBSD ]] 00:03:35.395 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:35.395 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:35.395 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:35.395 + [[ -x /usr/src/fio-static/fio ]] 00:03:35.395 + sudo dmesg -Tw 00:03:35.395 + export FIO_BIN=/usr/src/fio-static/fio 00:03:35.395 + FIO_BIN=/usr/src/fio-static/fio 00:03:35.395 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:35.395 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:35.395 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:35.395 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:35.395 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:35.395 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:35.395 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:35.395 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:35.395 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:35.395 Test configuration: 00:03:35.395 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:35.395 SPDK_TEST_NVMF=1 00:03:35.395 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:35.395 SPDK_TEST_URING=1 00:03:35.395 SPDK_TEST_USDT=1 00:03:35.395 SPDK_RUN_UBSAN=1 00:03:35.395 NET_TYPE=virt 00:03:35.395 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:35.654 RUN_NIGHTLY=0 13:37:45 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:03:35.654 13:37:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:35.654 13:37:45 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:35.654 13:37:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:35.654 13:37:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:35.654 13:37:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:35.654 13:37:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.654 13:37:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.654 13:37:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.654 13:37:45 -- paths/export.sh@5 -- $ export PATH 00:03:35.654 13:37:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.654 13:37:45 -- common/autobuild_common.sh@478 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:03:35.654 13:37:45 -- common/autobuild_common.sh@479 -- $ date +%s 00:03:35.654 13:37:45 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727789865.XXXXXX 00:03:35.654 13:37:45 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727789865.AQOQFB 00:03:35.654 13:37:45 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:03:35.654 13:37:45 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:03:35.654 13:37:45 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:35.654 13:37:45 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:35.654 13:37:45 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:35.654 13:37:45 -- common/autobuild_common.sh@495 -- $ get_config_params 00:03:35.654 13:37:45 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:35.654 13:37:45 -- common/autotest_common.sh@10 -- $ set +x 00:03:35.654 13:37:45 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:03:35.654 13:37:45 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:03:35.654 13:37:45 -- pm/common@17 -- $ local monitor 00:03:35.654 13:37:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.654 13:37:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.654 13:37:45 -- pm/common@25 -- $ sleep 1 00:03:35.654 13:37:45 -- pm/common@21 -- $ date +%s 00:03:35.654 13:37:45 -- pm/common@21 -- $ date +%s 00:03:35.654 13:37:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727789865 00:03:35.654 13:37:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727789865 00:03:35.654 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727789865_collect-cpu-load.pm.log 00:03:35.654 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727789865_collect-vmstat.pm.log 00:03:36.587 13:37:46 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:03:36.587 13:37:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:36.587 13:37:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:36.587 13:37:46 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:36.587 13:37:46 -- spdk/autobuild.sh@16 -- $ date -u 00:03:36.587 Tue Oct 1 01:37:46 PM UTC 2024 00:03:36.587 13:37:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:36.587 v25.01-pre-20-g3a41ae5b3 00:03:36.587 13:37:46 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:36.587 13:37:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:36.587 13:37:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:36.587 13:37:46 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:36.587 13:37:46 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:36.587 13:37:46 -- common/autotest_common.sh@10 -- $ set +x 00:03:36.587 
************************************ 00:03:36.587 START TEST ubsan 00:03:36.587 ************************************ 00:03:36.587 13:37:46 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:36.587 using ubsan 00:03:36.587 00:03:36.587 real 0m0.000s 00:03:36.587 user 0m0.000s 00:03:36.587 sys 0m0.000s 00:03:36.587 13:37:46 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:36.587 13:37:46 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:36.587 ************************************ 00:03:36.587 END TEST ubsan 00:03:36.587 ************************************ 00:03:36.587 13:37:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:36.587 13:37:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:36.587 13:37:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:36.587 13:37:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:36.587 13:37:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:36.587 13:37:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:36.587 13:37:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:36.587 13:37:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:36.587 13:37:46 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:03:36.845 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:36.845 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:37.102 Using 'verbs' RDMA provider 00:03:50.260 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:05.171 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:05.171 Creating mk/config.mk...done. 00:04:05.171 Creating mk/cc.flags.mk...done. 00:04:05.171 Type 'make' to build. 00:04:05.171 13:38:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:05.171 13:38:13 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:05.171 13:38:13 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:05.171 13:38:13 -- common/autotest_common.sh@10 -- $ set +x 00:04:05.171 ************************************ 00:04:05.171 START TEST make 00:04:05.171 ************************************ 00:04:05.171 13:38:13 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:05.171 make[1]: Nothing to be done for 'all'. 
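Note on the build step above: the ./configure flags are assembled from autorun-spdk.conf (in this log SPDK_RUN_UBSAN=1 and SPDK_TEST_URING=1 correspond to --enable-ubsan and --with-uring in the configure line). A minimal sketch of reproducing the same build outside the CI VM, assuming a Fedora host and keeping only flags that appear in this log:

  git clone https://github.com/spdk/spdk && cd spdk
  git submodule update --init                  # pulls the bundled dpdk, isa-l, etc.
  sudo ./scripts/pkgdep.sh                     # install build dependencies
  ./configure --enable-debug --enable-werror --enable-ubsan --with-uring --with-shared
  make -j"$(nproc)"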
00:04:17.418 The Meson build system 00:04:17.418 Version: 1.5.0 00:04:17.418 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:17.418 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:17.418 Build type: native build 00:04:17.418 Program cat found: YES (/usr/bin/cat) 00:04:17.418 Project name: DPDK 00:04:17.418 Project version: 24.03.0 00:04:17.418 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:17.418 C linker for the host machine: cc ld.bfd 2.40-14 00:04:17.418 Host machine cpu family: x86_64 00:04:17.418 Host machine cpu: x86_64 00:04:17.418 Message: ## Building in Developer Mode ## 00:04:17.418 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:17.418 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:17.418 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:17.418 Program python3 found: YES (/usr/bin/python3) 00:04:17.418 Program cat found: YES (/usr/bin/cat) 00:04:17.418 Compiler for C supports arguments -march=native: YES 00:04:17.418 Checking for size of "void *" : 8 00:04:17.418 Checking for size of "void *" : 8 (cached) 00:04:17.418 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:17.418 Library m found: YES 00:04:17.418 Library numa found: YES 00:04:17.418 Has header "numaif.h" : YES 00:04:17.418 Library fdt found: NO 00:04:17.418 Library execinfo found: NO 00:04:17.418 Has header "execinfo.h" : YES 00:04:17.418 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:17.418 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:17.418 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:17.418 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:17.418 Run-time dependency openssl found: YES 3.1.1 00:04:17.418 Run-time dependency libpcap found: YES 1.10.4 00:04:17.418 Has header "pcap.h" with dependency libpcap: YES 00:04:17.418 Compiler for C supports arguments -Wcast-qual: YES 00:04:17.418 Compiler for C supports arguments -Wdeprecated: YES 00:04:17.418 Compiler for C supports arguments -Wformat: YES 00:04:17.418 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:17.418 Compiler for C supports arguments -Wformat-security: NO 00:04:17.418 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:17.418 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:17.418 Compiler for C supports arguments -Wnested-externs: YES 00:04:17.418 Compiler for C supports arguments -Wold-style-definition: YES 00:04:17.418 Compiler for C supports arguments -Wpointer-arith: YES 00:04:17.418 Compiler for C supports arguments -Wsign-compare: YES 00:04:17.418 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:17.418 Compiler for C supports arguments -Wundef: YES 00:04:17.418 Compiler for C supports arguments -Wwrite-strings: YES 00:04:17.418 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:17.418 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:17.418 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:17.418 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:17.418 Program objdump found: YES (/usr/bin/objdump) 00:04:17.418 Compiler for C supports arguments -mavx512f: YES 00:04:17.418 Checking if "AVX512 checking" compiles: YES 00:04:17.418 Fetching value of define "__SSE4_2__" : 1 00:04:17.418 Fetching value of define 
"__AES__" : 1 00:04:17.418 Fetching value of define "__AVX__" : 1 00:04:17.418 Fetching value of define "__AVX2__" : 1 00:04:17.418 Fetching value of define "__AVX512BW__" : (undefined) 00:04:17.418 Fetching value of define "__AVX512CD__" : (undefined) 00:04:17.418 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:17.418 Fetching value of define "__AVX512F__" : (undefined) 00:04:17.418 Fetching value of define "__AVX512VL__" : (undefined) 00:04:17.418 Fetching value of define "__PCLMUL__" : 1 00:04:17.418 Fetching value of define "__RDRND__" : 1 00:04:17.418 Fetching value of define "__RDSEED__" : 1 00:04:17.418 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:17.418 Fetching value of define "__znver1__" : (undefined) 00:04:17.418 Fetching value of define "__znver2__" : (undefined) 00:04:17.418 Fetching value of define "__znver3__" : (undefined) 00:04:17.418 Fetching value of define "__znver4__" : (undefined) 00:04:17.418 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:17.418 Message: lib/log: Defining dependency "log" 00:04:17.418 Message: lib/kvargs: Defining dependency "kvargs" 00:04:17.418 Message: lib/telemetry: Defining dependency "telemetry" 00:04:17.418 Checking for function "getentropy" : NO 00:04:17.418 Message: lib/eal: Defining dependency "eal" 00:04:17.418 Message: lib/ring: Defining dependency "ring" 00:04:17.418 Message: lib/rcu: Defining dependency "rcu" 00:04:17.418 Message: lib/mempool: Defining dependency "mempool" 00:04:17.418 Message: lib/mbuf: Defining dependency "mbuf" 00:04:17.418 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:17.418 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:17.418 Compiler for C supports arguments -mpclmul: YES 00:04:17.418 Compiler for C supports arguments -maes: YES 00:04:17.418 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:17.418 Compiler for C supports arguments -mavx512bw: YES 00:04:17.418 Compiler for C supports arguments -mavx512dq: YES 00:04:17.418 Compiler for C supports arguments -mavx512vl: YES 00:04:17.418 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:17.418 Compiler for C supports arguments -mavx2: YES 00:04:17.418 Compiler for C supports arguments -mavx: YES 00:04:17.418 Message: lib/net: Defining dependency "net" 00:04:17.418 Message: lib/meter: Defining dependency "meter" 00:04:17.418 Message: lib/ethdev: Defining dependency "ethdev" 00:04:17.418 Message: lib/pci: Defining dependency "pci" 00:04:17.418 Message: lib/cmdline: Defining dependency "cmdline" 00:04:17.418 Message: lib/hash: Defining dependency "hash" 00:04:17.418 Message: lib/timer: Defining dependency "timer" 00:04:17.418 Message: lib/compressdev: Defining dependency "compressdev" 00:04:17.419 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:17.419 Message: lib/dmadev: Defining dependency "dmadev" 00:04:17.419 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:17.419 Message: lib/power: Defining dependency "power" 00:04:17.419 Message: lib/reorder: Defining dependency "reorder" 00:04:17.419 Message: lib/security: Defining dependency "security" 00:04:17.419 Has header "linux/userfaultfd.h" : YES 00:04:17.419 Has header "linux/vduse.h" : YES 00:04:17.419 Message: lib/vhost: Defining dependency "vhost" 00:04:17.419 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:17.419 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:17.419 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:17.419 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:17.419 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:17.419 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:17.419 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:17.419 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:17.419 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:17.419 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:17.419 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:17.419 Configuring doxy-api-html.conf using configuration 00:04:17.419 Configuring doxy-api-man.conf using configuration 00:04:17.419 Program mandb found: YES (/usr/bin/mandb) 00:04:17.419 Program sphinx-build found: NO 00:04:17.419 Configuring rte_build_config.h using configuration 00:04:17.419 Message: 00:04:17.419 ================= 00:04:17.419 Applications Enabled 00:04:17.419 ================= 00:04:17.419 00:04:17.419 apps: 00:04:17.419 00:04:17.419 00:04:17.419 Message: 00:04:17.419 ================= 00:04:17.419 Libraries Enabled 00:04:17.419 ================= 00:04:17.419 00:04:17.419 libs: 00:04:17.419 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:17.419 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:17.419 cryptodev, dmadev, power, reorder, security, vhost, 00:04:17.419 00:04:17.419 Message: 00:04:17.419 =============== 00:04:17.419 Drivers Enabled 00:04:17.419 =============== 00:04:17.419 00:04:17.419 common: 00:04:17.419 00:04:17.419 bus: 00:04:17.419 pci, vdev, 00:04:17.419 mempool: 00:04:17.419 ring, 00:04:17.419 dma: 00:04:17.419 00:04:17.419 net: 00:04:17.419 00:04:17.419 crypto: 00:04:17.419 00:04:17.419 compress: 00:04:17.419 00:04:17.419 vdpa: 00:04:17.419 00:04:17.419 00:04:17.419 Message: 00:04:17.419 ================= 00:04:17.419 Content Skipped 00:04:17.419 ================= 00:04:17.419 00:04:17.419 apps: 00:04:17.419 dumpcap: explicitly disabled via build config 00:04:17.419 graph: explicitly disabled via build config 00:04:17.419 pdump: explicitly disabled via build config 00:04:17.419 proc-info: explicitly disabled via build config 00:04:17.419 test-acl: explicitly disabled via build config 00:04:17.419 test-bbdev: explicitly disabled via build config 00:04:17.419 test-cmdline: explicitly disabled via build config 00:04:17.419 test-compress-perf: explicitly disabled via build config 00:04:17.419 test-crypto-perf: explicitly disabled via build config 00:04:17.419 test-dma-perf: explicitly disabled via build config 00:04:17.419 test-eventdev: explicitly disabled via build config 00:04:17.419 test-fib: explicitly disabled via build config 00:04:17.419 test-flow-perf: explicitly disabled via build config 00:04:17.419 test-gpudev: explicitly disabled via build config 00:04:17.419 test-mldev: explicitly disabled via build config 00:04:17.419 test-pipeline: explicitly disabled via build config 00:04:17.419 test-pmd: explicitly disabled via build config 00:04:17.419 test-regex: explicitly disabled via build config 00:04:17.419 test-sad: explicitly disabled via build config 00:04:17.419 test-security-perf: explicitly disabled via build config 00:04:17.419 00:04:17.419 libs: 00:04:17.419 argparse: explicitly disabled via build config 00:04:17.419 metrics: explicitly disabled via build config 00:04:17.419 acl: explicitly disabled via build config 00:04:17.419 bbdev: explicitly disabled via build config 
00:04:17.419 bitratestats: explicitly disabled via build config 00:04:17.419 bpf: explicitly disabled via build config 00:04:17.419 cfgfile: explicitly disabled via build config 00:04:17.419 distributor: explicitly disabled via build config 00:04:17.419 efd: explicitly disabled via build config 00:04:17.419 eventdev: explicitly disabled via build config 00:04:17.419 dispatcher: explicitly disabled via build config 00:04:17.419 gpudev: explicitly disabled via build config 00:04:17.419 gro: explicitly disabled via build config 00:04:17.419 gso: explicitly disabled via build config 00:04:17.419 ip_frag: explicitly disabled via build config 00:04:17.419 jobstats: explicitly disabled via build config 00:04:17.419 latencystats: explicitly disabled via build config 00:04:17.419 lpm: explicitly disabled via build config 00:04:17.419 member: explicitly disabled via build config 00:04:17.419 pcapng: explicitly disabled via build config 00:04:17.419 rawdev: explicitly disabled via build config 00:04:17.419 regexdev: explicitly disabled via build config 00:04:17.419 mldev: explicitly disabled via build config 00:04:17.419 rib: explicitly disabled via build config 00:04:17.419 sched: explicitly disabled via build config 00:04:17.419 stack: explicitly disabled via build config 00:04:17.419 ipsec: explicitly disabled via build config 00:04:17.419 pdcp: explicitly disabled via build config 00:04:17.419 fib: explicitly disabled via build config 00:04:17.419 port: explicitly disabled via build config 00:04:17.419 pdump: explicitly disabled via build config 00:04:17.419 table: explicitly disabled via build config 00:04:17.419 pipeline: explicitly disabled via build config 00:04:17.419 graph: explicitly disabled via build config 00:04:17.419 node: explicitly disabled via build config 00:04:17.419 00:04:17.419 drivers: 00:04:17.419 common/cpt: not in enabled drivers build config 00:04:17.419 common/dpaax: not in enabled drivers build config 00:04:17.419 common/iavf: not in enabled drivers build config 00:04:17.419 common/idpf: not in enabled drivers build config 00:04:17.419 common/ionic: not in enabled drivers build config 00:04:17.419 common/mvep: not in enabled drivers build config 00:04:17.419 common/octeontx: not in enabled drivers build config 00:04:17.419 bus/auxiliary: not in enabled drivers build config 00:04:17.419 bus/cdx: not in enabled drivers build config 00:04:17.419 bus/dpaa: not in enabled drivers build config 00:04:17.419 bus/fslmc: not in enabled drivers build config 00:04:17.419 bus/ifpga: not in enabled drivers build config 00:04:17.419 bus/platform: not in enabled drivers build config 00:04:17.419 bus/uacce: not in enabled drivers build config 00:04:17.419 bus/vmbus: not in enabled drivers build config 00:04:17.419 common/cnxk: not in enabled drivers build config 00:04:17.419 common/mlx5: not in enabled drivers build config 00:04:17.419 common/nfp: not in enabled drivers build config 00:04:17.419 common/nitrox: not in enabled drivers build config 00:04:17.419 common/qat: not in enabled drivers build config 00:04:17.419 common/sfc_efx: not in enabled drivers build config 00:04:17.419 mempool/bucket: not in enabled drivers build config 00:04:17.419 mempool/cnxk: not in enabled drivers build config 00:04:17.419 mempool/dpaa: not in enabled drivers build config 00:04:17.419 mempool/dpaa2: not in enabled drivers build config 00:04:17.419 mempool/octeontx: not in enabled drivers build config 00:04:17.419 mempool/stack: not in enabled drivers build config 00:04:17.419 dma/cnxk: not in enabled 
drivers build config 00:04:17.419 dma/dpaa: not in enabled drivers build config 00:04:17.419 dma/dpaa2: not in enabled drivers build config 00:04:17.419 dma/hisilicon: not in enabled drivers build config 00:04:17.419 dma/idxd: not in enabled drivers build config 00:04:17.419 dma/ioat: not in enabled drivers build config 00:04:17.419 dma/skeleton: not in enabled drivers build config 00:04:17.419 net/af_packet: not in enabled drivers build config 00:04:17.419 net/af_xdp: not in enabled drivers build config 00:04:17.419 net/ark: not in enabled drivers build config 00:04:17.419 net/atlantic: not in enabled drivers build config 00:04:17.419 net/avp: not in enabled drivers build config 00:04:17.419 net/axgbe: not in enabled drivers build config 00:04:17.419 net/bnx2x: not in enabled drivers build config 00:04:17.419 net/bnxt: not in enabled drivers build config 00:04:17.419 net/bonding: not in enabled drivers build config 00:04:17.419 net/cnxk: not in enabled drivers build config 00:04:17.419 net/cpfl: not in enabled drivers build config 00:04:17.419 net/cxgbe: not in enabled drivers build config 00:04:17.419 net/dpaa: not in enabled drivers build config 00:04:17.419 net/dpaa2: not in enabled drivers build config 00:04:17.419 net/e1000: not in enabled drivers build config 00:04:17.419 net/ena: not in enabled drivers build config 00:04:17.419 net/enetc: not in enabled drivers build config 00:04:17.419 net/enetfec: not in enabled drivers build config 00:04:17.419 net/enic: not in enabled drivers build config 00:04:17.419 net/failsafe: not in enabled drivers build config 00:04:17.419 net/fm10k: not in enabled drivers build config 00:04:17.419 net/gve: not in enabled drivers build config 00:04:17.419 net/hinic: not in enabled drivers build config 00:04:17.419 net/hns3: not in enabled drivers build config 00:04:17.419 net/i40e: not in enabled drivers build config 00:04:17.419 net/iavf: not in enabled drivers build config 00:04:17.419 net/ice: not in enabled drivers build config 00:04:17.419 net/idpf: not in enabled drivers build config 00:04:17.419 net/igc: not in enabled drivers build config 00:04:17.419 net/ionic: not in enabled drivers build config 00:04:17.419 net/ipn3ke: not in enabled drivers build config 00:04:17.419 net/ixgbe: not in enabled drivers build config 00:04:17.419 net/mana: not in enabled drivers build config 00:04:17.419 net/memif: not in enabled drivers build config 00:04:17.419 net/mlx4: not in enabled drivers build config 00:04:17.419 net/mlx5: not in enabled drivers build config 00:04:17.419 net/mvneta: not in enabled drivers build config 00:04:17.420 net/mvpp2: not in enabled drivers build config 00:04:17.420 net/netvsc: not in enabled drivers build config 00:04:17.420 net/nfb: not in enabled drivers build config 00:04:17.420 net/nfp: not in enabled drivers build config 00:04:17.420 net/ngbe: not in enabled drivers build config 00:04:17.420 net/null: not in enabled drivers build config 00:04:17.420 net/octeontx: not in enabled drivers build config 00:04:17.420 net/octeon_ep: not in enabled drivers build config 00:04:17.420 net/pcap: not in enabled drivers build config 00:04:17.420 net/pfe: not in enabled drivers build config 00:04:17.420 net/qede: not in enabled drivers build config 00:04:17.420 net/ring: not in enabled drivers build config 00:04:17.420 net/sfc: not in enabled drivers build config 00:04:17.420 net/softnic: not in enabled drivers build config 00:04:17.420 net/tap: not in enabled drivers build config 00:04:17.420 net/thunderx: not in enabled drivers build 
config 00:04:17.420 net/txgbe: not in enabled drivers build config 00:04:17.420 net/vdev_netvsc: not in enabled drivers build config 00:04:17.420 net/vhost: not in enabled drivers build config 00:04:17.420 net/virtio: not in enabled drivers build config 00:04:17.420 net/vmxnet3: not in enabled drivers build config 00:04:17.420 raw/*: missing internal dependency, "rawdev" 00:04:17.420 crypto/armv8: not in enabled drivers build config 00:04:17.420 crypto/bcmfs: not in enabled drivers build config 00:04:17.420 crypto/caam_jr: not in enabled drivers build config 00:04:17.420 crypto/ccp: not in enabled drivers build config 00:04:17.420 crypto/cnxk: not in enabled drivers build config 00:04:17.420 crypto/dpaa_sec: not in enabled drivers build config 00:04:17.420 crypto/dpaa2_sec: not in enabled drivers build config 00:04:17.420 crypto/ipsec_mb: not in enabled drivers build config 00:04:17.420 crypto/mlx5: not in enabled drivers build config 00:04:17.420 crypto/mvsam: not in enabled drivers build config 00:04:17.420 crypto/nitrox: not in enabled drivers build config 00:04:17.420 crypto/null: not in enabled drivers build config 00:04:17.420 crypto/octeontx: not in enabled drivers build config 00:04:17.420 crypto/openssl: not in enabled drivers build config 00:04:17.420 crypto/scheduler: not in enabled drivers build config 00:04:17.420 crypto/uadk: not in enabled drivers build config 00:04:17.420 crypto/virtio: not in enabled drivers build config 00:04:17.420 compress/isal: not in enabled drivers build config 00:04:17.420 compress/mlx5: not in enabled drivers build config 00:04:17.420 compress/nitrox: not in enabled drivers build config 00:04:17.420 compress/octeontx: not in enabled drivers build config 00:04:17.420 compress/zlib: not in enabled drivers build config 00:04:17.420 regex/*: missing internal dependency, "regexdev" 00:04:17.420 ml/*: missing internal dependency, "mldev" 00:04:17.420 vdpa/ifc: not in enabled drivers build config 00:04:17.420 vdpa/mlx5: not in enabled drivers build config 00:04:17.420 vdpa/nfp: not in enabled drivers build config 00:04:17.420 vdpa/sfc: not in enabled drivers build config 00:04:17.420 event/*: missing internal dependency, "eventdev" 00:04:17.420 baseband/*: missing internal dependency, "bbdev" 00:04:17.420 gpu/*: missing internal dependency, "gpudev" 00:04:17.420 00:04:17.420 00:04:17.420 Build targets in project: 85 00:04:17.420 00:04:17.420 DPDK 24.03.0 00:04:17.420 00:04:17.420 User defined options 00:04:17.420 buildtype : debug 00:04:17.420 default_library : shared 00:04:17.420 libdir : lib 00:04:17.420 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:17.420 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:17.420 c_link_args : 00:04:17.420 cpu_instruction_set: native 00:04:17.420 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:17.420 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:17.420 enable_docs : false 00:04:17.420 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:17.420 enable_kmods : false 00:04:17.420 max_lcores : 128 00:04:17.420 tests : false 00:04:17.420 
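Note on the "User defined options" summary above: it is the configuration SPDK's bundled DPDK build hands to meson. A rough sketch of an equivalent standalone setup, reconstructed from that summary (the long disable_apps/disable_libs lists are left out for brevity; omitting them only means meson configures more components than the CI build does):

  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
    -Dbuildtype=debug \
    -Ddefault_library=shared \
    -Dlibdir=lib \
    -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dmax_lcores=128 \
    -Dtests=false
  ninja -C build-tmp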
00:04:17.420 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:17.986 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:17.986 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:17.986 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:17.986 [3/268] Linking static target lib/librte_kvargs.a 00:04:17.986 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:18.245 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:18.245 [6/268] Linking static target lib/librte_log.a 00:04:18.503 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.503 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:18.761 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:18.761 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:18.761 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:19.019 [12/268] Linking static target lib/librte_telemetry.a 00:04:19.019 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:19.019 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:19.019 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:19.019 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:19.019 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:19.019 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:19.019 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.277 [20/268] Linking target lib/librte_log.so.24.1 00:04:19.534 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:19.534 [22/268] Linking target lib/librte_kvargs.so.24.1 00:04:19.534 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:19.791 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:19.791 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:19.791 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:19.791 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:19.791 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.791 [29/268] Linking target lib/librte_telemetry.so.24.1 00:04:19.791 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:19.791 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:20.049 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:20.049 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:20.049 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:20.049 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:20.049 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:20.306 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:20.564 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 
00:04:20.564 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:20.564 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:20.821 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:20.822 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:20.822 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:20.822 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:20.822 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:21.079 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:21.079 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:21.079 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:21.079 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:21.337 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:21.337 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:21.617 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:21.617 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:21.617 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:21.617 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:21.879 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:21.879 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:21.879 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:22.144 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:22.144 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:22.144 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:22.402 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:22.402 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:22.660 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:22.660 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:22.660 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:22.918 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:22.918 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:22.918 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:22.918 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:22.918 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:22.918 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:23.176 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:23.176 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:23.176 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:23.176 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:23.176 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:23.434 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:23.434 
[79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:23.434 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:23.434 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:23.434 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:23.691 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:23.691 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:23.691 [85/268] Linking static target lib/librte_ring.a 00:04:23.691 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:23.950 [87/268] Linking static target lib/librte_eal.a 00:04:23.950 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:23.950 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:23.950 [90/268] Linking static target lib/librte_rcu.a 00:04:23.950 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:23.950 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:24.208 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:24.208 [94/268] Linking static target lib/librte_mempool.a 00:04:24.208 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.208 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:24.465 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.465 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:24.723 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:24.723 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:24.723 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:24.723 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:24.723 [103/268] Linking static target lib/librte_mbuf.a 00:04:24.723 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:24.981 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:25.238 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:25.238 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:25.238 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:25.238 [109/268] Linking static target lib/librte_net.a 00:04:25.496 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.496 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:25.496 [112/268] Linking static target lib/librte_meter.a 00:04:25.496 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:25.496 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:25.755 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:25.755 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.013 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.013 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:26.013 [119/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.271 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:26.271 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:26.885 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:26.885 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:26.885 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:26.885 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:26.885 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:26.885 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:26.885 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:27.143 [129/268] Linking static target lib/librte_pci.a 00:04:27.143 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:27.143 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:27.143 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:27.143 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:27.143 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:27.143 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:27.143 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:27.401 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:27.401 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:27.401 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:27.401 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:27.401 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:27.401 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:27.401 [143/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.401 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:27.401 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:27.659 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:27.659 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:27.659 [148/268] Linking static target lib/librte_ethdev.a 00:04:27.918 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:27.918 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:27.918 [151/268] Linking static target lib/librte_cmdline.a 00:04:27.918 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:28.177 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:28.177 [154/268] Linking static target lib/librte_timer.a 00:04:28.435 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:28.435 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:28.435 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:28.435 [158/268] Linking static target lib/librte_hash.a 00:04:28.435 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:28.692 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:28.692 
[161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:28.692 [162/268] Linking static target lib/librte_compressdev.a 00:04:28.951 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.951 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:29.209 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:29.209 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:29.209 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:29.209 [168/268] Linking static target lib/librte_dmadev.a 00:04:29.209 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:29.466 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:29.466 [171/268] Linking static target lib/librte_cryptodev.a 00:04:29.466 [172/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.724 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:29.724 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:29.724 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.724 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.724 [177/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:29.982 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:30.240 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.240 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:30.240 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:30.240 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:30.240 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:30.240 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:30.563 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:30.563 [186/268] Linking static target lib/librte_power.a 00:04:30.821 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:30.821 [188/268] Linking static target lib/librte_reorder.a 00:04:31.078 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:31.078 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:31.078 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:31.078 [192/268] Linking static target lib/librte_security.a 00:04:31.337 [193/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.337 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:31.337 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:31.594 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.852 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.852 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:32.110 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:32.110 [200/268] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.110 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:32.110 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:32.368 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:32.368 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:32.626 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:32.884 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:32.884 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:32.884 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:32.884 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:32.884 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:32.884 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:32.884 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:33.143 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:33.143 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:33.143 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:33.143 [216/268] Linking static target drivers/librte_bus_vdev.a 00:04:33.143 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:33.143 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:33.143 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:33.143 [220/268] Linking static target drivers/librte_bus_pci.a 00:04:33.143 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:33.143 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:33.403 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.403 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:33.403 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:33.403 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:33.403 [227/268] Linking static target drivers/librte_mempool_ring.a 00:04:33.663 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.598 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:34.598 [230/268] Linking static target lib/librte_vhost.a 00:04:35.165 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.165 [232/268] Linking target lib/librte_eal.so.24.1 00:04:35.430 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:35.430 [234/268] Linking target lib/librte_timer.so.24.1 00:04:35.430 [235/268] Linking target lib/librte_pci.so.24.1 00:04:35.430 [236/268] Linking target lib/librte_ring.so.24.1 00:04:35.430 [237/268] Linking target lib/librte_meter.so.24.1 00:04:35.430 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:35.430 [239/268] Linking target 
lib/librte_dmadev.so.24.1 00:04:35.717 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:35.717 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:35.717 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:35.717 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:35.717 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:35.717 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:35.717 [246/268] Linking target lib/librte_rcu.so.24.1 00:04:35.717 [247/268] Linking target lib/librte_mempool.so.24.1 00:04:35.717 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:35.717 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:35.717 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.975 [251/268] Linking target lib/librte_mbuf.so.24.1 00:04:35.975 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:35.975 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:35.975 [254/268] Linking target lib/librte_net.so.24.1 00:04:35.975 [255/268] Linking target lib/librte_compressdev.so.24.1 00:04:35.975 [256/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.975 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:04:35.975 [258/268] Linking target lib/librte_reorder.so.24.1 00:04:36.232 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:36.232 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:36.232 [261/268] Linking target lib/librte_hash.so.24.1 00:04:36.232 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:36.232 [263/268] Linking target lib/librte_ethdev.so.24.1 00:04:36.232 [264/268] Linking target lib/librte_security.so.24.1 00:04:36.489 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:36.489 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:36.489 [267/268] Linking target lib/librte_vhost.so.24.1 00:04:36.489 [268/268] Linking target lib/librte_power.so.24.1 00:04:36.489 INFO: autodetecting backend as ninja 00:04:36.489 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:03.025 CC lib/ut_mock/mock.o 00:05:03.025 CC lib/log/log.o 00:05:03.025 CC lib/log/log_flags.o 00:05:03.025 CC lib/log/log_deprecated.o 00:05:03.025 CC lib/ut/ut.o 00:05:03.025 LIB libspdk_log.a 00:05:03.025 LIB libspdk_ut_mock.a 00:05:03.025 LIB libspdk_ut.a 00:05:03.025 SO libspdk_log.so.7.0 00:05:03.025 SO libspdk_ut_mock.so.6.0 00:05:03.025 SO libspdk_ut.so.2.0 00:05:03.025 SYMLINK libspdk_ut_mock.so 00:05:03.025 SYMLINK libspdk_log.so 00:05:03.025 SYMLINK libspdk_ut.so 00:05:03.025 CXX lib/trace_parser/trace.o 00:05:03.025 CC lib/dma/dma.o 00:05:03.025 CC lib/ioat/ioat.o 00:05:03.025 CC lib/util/base64.o 00:05:03.025 CC lib/util/cpuset.o 00:05:03.025 CC lib/util/bit_array.o 00:05:03.025 CC lib/util/crc16.o 00:05:03.025 CC lib/util/crc32.o 00:05:03.025 CC lib/util/crc32c.o 00:05:03.025 CC lib/util/crc32_ieee.o 00:05:03.025 CC lib/vfio_user/host/vfio_user_pci.o 00:05:03.025 CC lib/util/crc64.o 00:05:03.025 
CC lib/util/dif.o 00:05:03.025 CC lib/vfio_user/host/vfio_user.o 00:05:03.025 CC lib/util/fd.o 00:05:03.025 CC lib/util/fd_group.o 00:05:03.025 CC lib/util/file.o 00:05:03.025 CC lib/util/hexlify.o 00:05:03.025 LIB libspdk_dma.a 00:05:03.025 LIB libspdk_ioat.a 00:05:03.025 SO libspdk_dma.so.5.0 00:05:03.025 SO libspdk_ioat.so.7.0 00:05:03.025 CC lib/util/iov.o 00:05:03.025 CC lib/util/math.o 00:05:03.025 LIB libspdk_vfio_user.a 00:05:03.025 CC lib/util/net.o 00:05:03.025 SYMLINK libspdk_ioat.so 00:05:03.025 SYMLINK libspdk_dma.so 00:05:03.025 CC lib/util/pipe.o 00:05:03.025 CC lib/util/strerror_tls.o 00:05:03.025 CC lib/util/string.o 00:05:03.025 SO libspdk_vfio_user.so.5.0 00:05:03.025 CC lib/util/uuid.o 00:05:03.025 SYMLINK libspdk_vfio_user.so 00:05:03.025 CC lib/util/xor.o 00:05:03.025 CC lib/util/zipf.o 00:05:03.025 CC lib/util/md5.o 00:05:03.284 LIB libspdk_util.a 00:05:03.284 SO libspdk_util.so.10.0 00:05:03.543 SYMLINK libspdk_util.so 00:05:03.543 LIB libspdk_trace_parser.a 00:05:03.543 SO libspdk_trace_parser.so.6.0 00:05:03.802 CC lib/conf/conf.o 00:05:03.802 CC lib/rdma_utils/rdma_utils.o 00:05:03.802 CC lib/vmd/vmd.o 00:05:03.802 CC lib/json/json_parse.o 00:05:03.802 CC lib/json/json_util.o 00:05:03.802 CC lib/json/json_write.o 00:05:03.802 CC lib/idxd/idxd.o 00:05:03.802 CC lib/rdma_provider/common.o 00:05:03.802 CC lib/env_dpdk/env.o 00:05:03.802 SYMLINK libspdk_trace_parser.so 00:05:03.802 CC lib/env_dpdk/memory.o 00:05:03.802 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:03.802 LIB libspdk_conf.a 00:05:04.061 SO libspdk_conf.so.6.0 00:05:04.061 CC lib/env_dpdk/pci.o 00:05:04.061 CC lib/env_dpdk/init.o 00:05:04.061 LIB libspdk_rdma_utils.a 00:05:04.061 LIB libspdk_json.a 00:05:04.061 SO libspdk_rdma_utils.so.1.0 00:05:04.061 SYMLINK libspdk_conf.so 00:05:04.061 CC lib/env_dpdk/threads.o 00:05:04.061 SO libspdk_json.so.6.0 00:05:04.061 SYMLINK libspdk_rdma_utils.so 00:05:04.061 CC lib/vmd/led.o 00:05:04.061 SYMLINK libspdk_json.so 00:05:04.061 CC lib/idxd/idxd_user.o 00:05:04.061 LIB libspdk_rdma_provider.a 00:05:04.061 SO libspdk_rdma_provider.so.6.0 00:05:04.353 SYMLINK libspdk_rdma_provider.so 00:05:04.353 CC lib/env_dpdk/pci_ioat.o 00:05:04.353 CC lib/env_dpdk/pci_virtio.o 00:05:04.353 CC lib/env_dpdk/pci_vmd.o 00:05:04.353 LIB libspdk_vmd.a 00:05:04.353 CC lib/env_dpdk/pci_idxd.o 00:05:04.353 CC lib/idxd/idxd_kernel.o 00:05:04.353 CC lib/env_dpdk/pci_event.o 00:05:04.353 CC lib/jsonrpc/jsonrpc_server.o 00:05:04.353 CC lib/env_dpdk/sigbus_handler.o 00:05:04.353 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:04.353 SO libspdk_vmd.so.6.0 00:05:04.353 SYMLINK libspdk_vmd.so 00:05:04.353 CC lib/env_dpdk/pci_dpdk.o 00:05:04.353 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:04.611 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:04.611 CC lib/jsonrpc/jsonrpc_client.o 00:05:04.611 LIB libspdk_idxd.a 00:05:04.611 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:04.611 SO libspdk_idxd.so.12.1 00:05:04.611 SYMLINK libspdk_idxd.so 00:05:04.870 LIB libspdk_jsonrpc.a 00:05:04.870 SO libspdk_jsonrpc.so.6.0 00:05:04.870 SYMLINK libspdk_jsonrpc.so 00:05:05.127 CC lib/rpc/rpc.o 00:05:05.127 LIB libspdk_env_dpdk.a 00:05:05.386 SO libspdk_env_dpdk.so.15.0 00:05:05.386 LIB libspdk_rpc.a 00:05:05.386 SYMLINK libspdk_env_dpdk.so 00:05:05.386 SO libspdk_rpc.so.6.0 00:05:05.644 SYMLINK libspdk_rpc.so 00:05:05.644 CC lib/trace/trace.o 00:05:05.644 CC lib/trace/trace_flags.o 00:05:05.644 CC lib/trace/trace_rpc.o 00:05:05.644 CC lib/notify/notify_rpc.o 00:05:05.644 CC lib/notify/notify.o 00:05:05.644 CC lib/keyring/keyring.o 
00:05:05.644 CC lib/keyring/keyring_rpc.o 00:05:05.902 LIB libspdk_notify.a 00:05:05.902 SO libspdk_notify.so.6.0 00:05:05.902 LIB libspdk_trace.a 00:05:06.161 SYMLINK libspdk_notify.so 00:05:06.161 SO libspdk_trace.so.11.0 00:05:06.161 LIB libspdk_keyring.a 00:05:06.161 SO libspdk_keyring.so.2.0 00:05:06.161 SYMLINK libspdk_trace.so 00:05:06.161 SYMLINK libspdk_keyring.so 00:05:06.419 CC lib/thread/iobuf.o 00:05:06.419 CC lib/thread/thread.o 00:05:06.419 CC lib/sock/sock.o 00:05:06.419 CC lib/sock/sock_rpc.o 00:05:06.986 LIB libspdk_sock.a 00:05:06.986 SO libspdk_sock.so.10.0 00:05:06.986 SYMLINK libspdk_sock.so 00:05:07.243 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:07.244 CC lib/nvme/nvme_fabric.o 00:05:07.244 CC lib/nvme/nvme_ctrlr.o 00:05:07.244 CC lib/nvme/nvme_ns_cmd.o 00:05:07.244 CC lib/nvme/nvme_pcie.o 00:05:07.244 CC lib/nvme/nvme_ns.o 00:05:07.244 CC lib/nvme/nvme_pcie_common.o 00:05:07.244 CC lib/nvme/nvme_qpair.o 00:05:07.244 CC lib/nvme/nvme.o 00:05:08.185 LIB libspdk_thread.a 00:05:08.185 CC lib/nvme/nvme_quirks.o 00:05:08.185 SO libspdk_thread.so.10.1 00:05:08.185 CC lib/nvme/nvme_transport.o 00:05:08.185 SYMLINK libspdk_thread.so 00:05:08.185 CC lib/nvme/nvme_discovery.o 00:05:08.185 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:08.185 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:08.464 CC lib/nvme/nvme_tcp.o 00:05:08.464 CC lib/accel/accel.o 00:05:08.464 CC lib/blob/blobstore.o 00:05:08.464 CC lib/blob/request.o 00:05:08.722 CC lib/blob/zeroes.o 00:05:08.980 CC lib/blob/blob_bs_dev.o 00:05:08.980 CC lib/nvme/nvme_opal.o 00:05:08.980 CC lib/init/json_config.o 00:05:08.980 CC lib/virtio/virtio.o 00:05:08.980 CC lib/fsdev/fsdev.o 00:05:09.238 CC lib/fsdev/fsdev_io.o 00:05:09.238 CC lib/accel/accel_rpc.o 00:05:09.238 CC lib/init/subsystem.o 00:05:09.497 CC lib/nvme/nvme_io_msg.o 00:05:09.497 CC lib/virtio/virtio_vhost_user.o 00:05:09.497 CC lib/init/subsystem_rpc.o 00:05:09.497 CC lib/fsdev/fsdev_rpc.o 00:05:09.497 CC lib/init/rpc.o 00:05:09.497 CC lib/accel/accel_sw.o 00:05:09.497 CC lib/nvme/nvme_poll_group.o 00:05:09.756 CC lib/virtio/virtio_vfio_user.o 00:05:09.756 CC lib/virtio/virtio_pci.o 00:05:09.756 LIB libspdk_init.a 00:05:09.756 LIB libspdk_fsdev.a 00:05:09.756 CC lib/nvme/nvme_zns.o 00:05:09.756 SO libspdk_init.so.6.0 00:05:09.756 SO libspdk_fsdev.so.1.0 00:05:09.756 SYMLINK libspdk_init.so 00:05:09.756 CC lib/nvme/nvme_stubs.o 00:05:09.756 SYMLINK libspdk_fsdev.so 00:05:10.015 LIB libspdk_accel.a 00:05:10.015 LIB libspdk_virtio.a 00:05:10.015 CC lib/nvme/nvme_auth.o 00:05:10.015 SO libspdk_accel.so.16.0 00:05:10.015 SO libspdk_virtio.so.7.0 00:05:10.015 CC lib/event/app.o 00:05:10.015 SYMLINK libspdk_accel.so 00:05:10.015 CC lib/event/reactor.o 00:05:10.015 CC lib/event/log_rpc.o 00:05:10.015 SYMLINK libspdk_virtio.so 00:05:10.015 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:10.273 CC lib/bdev/bdev.o 00:05:10.273 CC lib/bdev/bdev_rpc.o 00:05:10.273 CC lib/bdev/bdev_zone.o 00:05:10.273 CC lib/nvme/nvme_cuse.o 00:05:10.273 CC lib/nvme/nvme_rdma.o 00:05:10.531 CC lib/event/app_rpc.o 00:05:10.531 CC lib/event/scheduler_static.o 00:05:10.531 CC lib/bdev/part.o 00:05:10.531 CC lib/bdev/scsi_nvme.o 00:05:10.790 LIB libspdk_event.a 00:05:10.790 LIB libspdk_fuse_dispatcher.a 00:05:10.790 SO libspdk_fuse_dispatcher.so.1.0 00:05:10.790 SO libspdk_event.so.14.0 00:05:10.790 SYMLINK libspdk_fuse_dispatcher.so 00:05:10.790 SYMLINK libspdk_event.so 00:05:11.725 LIB libspdk_blob.a 00:05:11.725 SO libspdk_blob.so.11.0 00:05:11.725 LIB libspdk_nvme.a 00:05:11.725 SYMLINK libspdk_blob.so 
00:05:11.984 SO libspdk_nvme.so.14.0 00:05:11.984 CC lib/lvol/lvol.o 00:05:11.984 CC lib/blobfs/blobfs.o 00:05:11.984 CC lib/blobfs/tree.o 00:05:12.242 SYMLINK libspdk_nvme.so 00:05:12.815 LIB libspdk_blobfs.a 00:05:12.815 SO libspdk_blobfs.so.10.0 00:05:13.074 LIB libspdk_bdev.a 00:05:13.074 SYMLINK libspdk_blobfs.so 00:05:13.074 SO libspdk_bdev.so.16.0 00:05:13.074 LIB libspdk_lvol.a 00:05:13.074 SO libspdk_lvol.so.10.0 00:05:13.074 SYMLINK libspdk_bdev.so 00:05:13.074 SYMLINK libspdk_lvol.so 00:05:13.333 CC lib/nbd/nbd.o 00:05:13.333 CC lib/scsi/dev.o 00:05:13.333 CC lib/nbd/nbd_rpc.o 00:05:13.333 CC lib/scsi/lun.o 00:05:13.333 CC lib/scsi/port.o 00:05:13.333 CC lib/scsi/scsi.o 00:05:13.333 CC lib/ftl/ftl_core.o 00:05:13.333 CC lib/nvmf/ctrlr_discovery.o 00:05:13.333 CC lib/nvmf/ctrlr.o 00:05:13.333 CC lib/ublk/ublk.o 00:05:13.591 CC lib/scsi/scsi_bdev.o 00:05:13.591 CC lib/scsi/scsi_pr.o 00:05:13.591 CC lib/scsi/scsi_rpc.o 00:05:13.591 CC lib/ublk/ublk_rpc.o 00:05:13.849 CC lib/scsi/task.o 00:05:13.849 CC lib/ftl/ftl_init.o 00:05:13.849 CC lib/ftl/ftl_layout.o 00:05:13.849 CC lib/ftl/ftl_debug.o 00:05:13.849 LIB libspdk_nbd.a 00:05:13.849 SO libspdk_nbd.so.7.0 00:05:13.849 CC lib/ftl/ftl_io.o 00:05:13.849 SYMLINK libspdk_nbd.so 00:05:13.849 CC lib/ftl/ftl_sb.o 00:05:13.849 CC lib/nvmf/ctrlr_bdev.o 00:05:14.108 CC lib/nvmf/subsystem.o 00:05:14.108 LIB libspdk_scsi.a 00:05:14.108 CC lib/ftl/ftl_l2p.o 00:05:14.108 LIB libspdk_ublk.a 00:05:14.108 CC lib/ftl/ftl_l2p_flat.o 00:05:14.108 SO libspdk_scsi.so.9.0 00:05:14.108 SO libspdk_ublk.so.3.0 00:05:14.108 CC lib/ftl/ftl_nv_cache.o 00:05:14.108 CC lib/ftl/ftl_band.o 00:05:14.108 SYMLINK libspdk_ublk.so 00:05:14.108 CC lib/ftl/ftl_band_ops.o 00:05:14.108 CC lib/ftl/ftl_writer.o 00:05:14.108 SYMLINK libspdk_scsi.so 00:05:14.108 CC lib/ftl/ftl_rq.o 00:05:14.366 CC lib/ftl/ftl_reloc.o 00:05:14.366 CC lib/iscsi/conn.o 00:05:14.366 CC lib/ftl/ftl_l2p_cache.o 00:05:14.625 CC lib/vhost/vhost.o 00:05:14.625 CC lib/ftl/ftl_p2l.o 00:05:14.625 CC lib/ftl/ftl_p2l_log.o 00:05:14.625 CC lib/ftl/mngt/ftl_mngt.o 00:05:14.883 CC lib/vhost/vhost_rpc.o 00:05:14.883 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:14.883 CC lib/nvmf/nvmf.o 00:05:15.141 CC lib/nvmf/nvmf_rpc.o 00:05:15.141 CC lib/nvmf/transport.o 00:05:15.141 CC lib/iscsi/init_grp.o 00:05:15.141 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:15.141 CC lib/nvmf/tcp.o 00:05:15.141 CC lib/nvmf/stubs.o 00:05:15.141 CC lib/nvmf/mdns_server.o 00:05:15.400 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:15.400 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:15.400 CC lib/iscsi/iscsi.o 00:05:15.400 CC lib/vhost/vhost_scsi.o 00:05:15.400 CC lib/nvmf/rdma.o 00:05:15.658 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:15.658 CC lib/nvmf/auth.o 00:05:15.658 CC lib/iscsi/param.o 00:05:15.658 CC lib/iscsi/portal_grp.o 00:05:15.916 CC lib/iscsi/tgt_node.o 00:05:15.916 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:15.916 CC lib/vhost/vhost_blk.o 00:05:16.175 CC lib/vhost/rte_vhost_user.o 00:05:16.175 CC lib/iscsi/iscsi_subsystem.o 00:05:16.175 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:16.432 CC lib/iscsi/iscsi_rpc.o 00:05:16.432 CC lib/iscsi/task.o 00:05:16.432 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:16.432 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:16.432 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:16.691 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:16.691 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:16.691 CC lib/ftl/utils/ftl_conf.o 00:05:16.691 CC lib/ftl/utils/ftl_md.o 00:05:16.691 CC lib/ftl/utils/ftl_mempool.o 00:05:16.691 CC lib/ftl/utils/ftl_bitmap.o 00:05:16.949 LIB 
libspdk_iscsi.a 00:05:16.949 CC lib/ftl/utils/ftl_property.o 00:05:16.949 SO libspdk_iscsi.so.8.0 00:05:16.949 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:16.949 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:16.949 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:17.214 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:17.214 SYMLINK libspdk_iscsi.so 00:05:17.214 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:17.214 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:17.214 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:17.214 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:17.214 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:17.214 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:17.214 LIB libspdk_vhost.a 00:05:17.214 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:17.214 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:17.479 SO libspdk_vhost.so.8.0 00:05:17.479 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:17.479 CC lib/ftl/base/ftl_base_dev.o 00:05:17.479 CC lib/ftl/base/ftl_base_bdev.o 00:05:17.479 CC lib/ftl/ftl_trace.o 00:05:17.479 SYMLINK libspdk_vhost.so 00:05:17.737 LIB libspdk_nvmf.a 00:05:17.737 LIB libspdk_ftl.a 00:05:17.737 SO libspdk_nvmf.so.19.0 00:05:17.996 SO libspdk_ftl.so.9.0 00:05:17.996 SYMLINK libspdk_nvmf.so 00:05:18.254 SYMLINK libspdk_ftl.so 00:05:18.821 CC module/env_dpdk/env_dpdk_rpc.o 00:05:18.821 CC module/sock/posix/posix.o 00:05:18.821 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:18.821 CC module/accel/error/accel_error.o 00:05:18.821 CC module/scheduler/gscheduler/gscheduler.o 00:05:18.821 CC module/fsdev/aio/fsdev_aio.o 00:05:18.821 CC module/blob/bdev/blob_bdev.o 00:05:18.821 CC module/sock/uring/uring.o 00:05:18.821 CC module/keyring/file/keyring.o 00:05:18.821 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:18.821 LIB libspdk_env_dpdk_rpc.a 00:05:18.821 SO libspdk_env_dpdk_rpc.so.6.0 00:05:18.821 SYMLINK libspdk_env_dpdk_rpc.so 00:05:18.821 LIB libspdk_scheduler_gscheduler.a 00:05:18.821 CC module/keyring/file/keyring_rpc.o 00:05:18.821 LIB libspdk_scheduler_dpdk_governor.a 00:05:18.821 CC module/accel/error/accel_error_rpc.o 00:05:18.821 SO libspdk_scheduler_gscheduler.so.4.0 00:05:19.079 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:19.079 LIB libspdk_scheduler_dynamic.a 00:05:19.079 SO libspdk_scheduler_dynamic.so.4.0 00:05:19.079 SYMLINK libspdk_scheduler_gscheduler.so 00:05:19.079 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:19.079 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:19.079 CC module/fsdev/aio/linux_aio_mgr.o 00:05:19.079 SYMLINK libspdk_scheduler_dynamic.so 00:05:19.079 LIB libspdk_blob_bdev.a 00:05:19.079 LIB libspdk_keyring_file.a 00:05:19.079 LIB libspdk_accel_error.a 00:05:19.079 SO libspdk_blob_bdev.so.11.0 00:05:19.079 SO libspdk_keyring_file.so.2.0 00:05:19.079 SO libspdk_accel_error.so.2.0 00:05:19.338 SYMLINK libspdk_keyring_file.so 00:05:19.338 SYMLINK libspdk_accel_error.so 00:05:19.338 SYMLINK libspdk_blob_bdev.so 00:05:19.338 CC module/keyring/linux/keyring.o 00:05:19.338 CC module/keyring/linux/keyring_rpc.o 00:05:19.338 CC module/accel/ioat/accel_ioat.o 00:05:19.338 CC module/accel/iaa/accel_iaa.o 00:05:19.338 LIB libspdk_fsdev_aio.a 00:05:19.338 CC module/accel/dsa/accel_dsa.o 00:05:19.338 CC module/accel/dsa/accel_dsa_rpc.o 00:05:19.597 SO libspdk_fsdev_aio.so.1.0 00:05:19.597 LIB libspdk_keyring_linux.a 00:05:19.597 SO libspdk_keyring_linux.so.1.0 00:05:19.597 CC module/blobfs/bdev/blobfs_bdev.o 00:05:19.597 CC module/bdev/delay/vbdev_delay.o 00:05:19.597 LIB libspdk_sock_uring.a 00:05:19.597 CC module/accel/ioat/accel_ioat_rpc.o 00:05:19.597 SYMLINK libspdk_fsdev_aio.so 
00:05:19.597 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:19.597 LIB libspdk_sock_posix.a 00:05:19.597 SO libspdk_sock_uring.so.5.0 00:05:19.597 SYMLINK libspdk_keyring_linux.so 00:05:19.597 CC module/accel/iaa/accel_iaa_rpc.o 00:05:19.597 SO libspdk_sock_posix.so.6.0 00:05:19.597 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:19.597 SYMLINK libspdk_sock_uring.so 00:05:19.597 LIB libspdk_accel_ioat.a 00:05:19.597 SYMLINK libspdk_sock_posix.so 00:05:19.856 LIB libspdk_accel_dsa.a 00:05:19.856 SO libspdk_accel_ioat.so.6.0 00:05:19.856 LIB libspdk_blobfs_bdev.a 00:05:19.856 LIB libspdk_accel_iaa.a 00:05:19.856 SO libspdk_accel_dsa.so.5.0 00:05:19.856 SO libspdk_accel_iaa.so.3.0 00:05:19.856 SO libspdk_blobfs_bdev.so.6.0 00:05:19.856 SYMLINK libspdk_accel_ioat.so 00:05:19.856 CC module/bdev/error/vbdev_error.o 00:05:19.856 SYMLINK libspdk_accel_dsa.so 00:05:19.856 CC module/bdev/gpt/gpt.o 00:05:19.856 SYMLINK libspdk_accel_iaa.so 00:05:19.856 CC module/bdev/gpt/vbdev_gpt.o 00:05:19.856 SYMLINK libspdk_blobfs_bdev.so 00:05:19.856 CC module/bdev/error/vbdev_error_rpc.o 00:05:19.856 CC module/bdev/lvol/vbdev_lvol.o 00:05:19.856 CC module/bdev/malloc/bdev_malloc.o 00:05:19.856 LIB libspdk_bdev_delay.a 00:05:20.116 SO libspdk_bdev_delay.so.6.0 00:05:20.116 CC module/bdev/null/bdev_null.o 00:05:20.116 CC module/bdev/nvme/bdev_nvme.o 00:05:20.116 CC module/bdev/passthru/vbdev_passthru.o 00:05:20.116 CC module/bdev/null/bdev_null_rpc.o 00:05:20.116 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:20.116 SYMLINK libspdk_bdev_delay.so 00:05:20.116 CC module/bdev/nvme/nvme_rpc.o 00:05:20.116 LIB libspdk_bdev_error.a 00:05:20.116 LIB libspdk_bdev_gpt.a 00:05:20.116 SO libspdk_bdev_error.so.6.0 00:05:20.116 SO libspdk_bdev_gpt.so.6.0 00:05:20.116 SYMLINK libspdk_bdev_error.so 00:05:20.116 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:20.116 SYMLINK libspdk_bdev_gpt.so 00:05:20.116 CC module/bdev/nvme/bdev_mdns_client.o 00:05:20.376 CC module/bdev/nvme/vbdev_opal.o 00:05:20.376 LIB libspdk_bdev_null.a 00:05:20.376 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:20.376 SO libspdk_bdev_null.so.6.0 00:05:20.376 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:20.376 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:20.376 SYMLINK libspdk_bdev_null.so 00:05:20.376 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:20.376 LIB libspdk_bdev_malloc.a 00:05:20.376 SO libspdk_bdev_malloc.so.6.0 00:05:20.376 LIB libspdk_bdev_passthru.a 00:05:20.635 CC module/bdev/raid/bdev_raid.o 00:05:20.635 SO libspdk_bdev_passthru.so.6.0 00:05:20.635 SYMLINK libspdk_bdev_malloc.so 00:05:20.635 CC module/bdev/raid/bdev_raid_rpc.o 00:05:20.635 CC module/bdev/raid/bdev_raid_sb.o 00:05:20.635 LIB libspdk_bdev_lvol.a 00:05:20.635 SYMLINK libspdk_bdev_passthru.so 00:05:20.635 CC module/bdev/split/vbdev_split.o 00:05:20.635 SO libspdk_bdev_lvol.so.6.0 00:05:20.635 SYMLINK libspdk_bdev_lvol.so 00:05:20.635 CC module/bdev/split/vbdev_split_rpc.o 00:05:20.635 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:20.893 CC module/bdev/aio/bdev_aio.o 00:05:20.893 CC module/bdev/uring/bdev_uring.o 00:05:20.893 CC module/bdev/aio/bdev_aio_rpc.o 00:05:20.893 CC module/bdev/raid/raid0.o 00:05:20.893 CC module/bdev/uring/bdev_uring_rpc.o 00:05:20.893 LIB libspdk_bdev_split.a 00:05:20.893 CC module/bdev/ftl/bdev_ftl.o 00:05:20.893 SO libspdk_bdev_split.so.6.0 00:05:21.151 SYMLINK libspdk_bdev_split.so 00:05:21.151 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:21.151 CC module/bdev/raid/raid1.o 00:05:21.151 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:21.151 CC 
module/bdev/raid/concat.o 00:05:21.151 LIB libspdk_bdev_aio.a 00:05:21.151 LIB libspdk_bdev_uring.a 00:05:21.151 SO libspdk_bdev_aio.so.6.0 00:05:21.151 SO libspdk_bdev_uring.so.6.0 00:05:21.151 SYMLINK libspdk_bdev_aio.so 00:05:21.410 SYMLINK libspdk_bdev_uring.so 00:05:21.410 LIB libspdk_bdev_zone_block.a 00:05:21.410 LIB libspdk_bdev_ftl.a 00:05:21.410 SO libspdk_bdev_zone_block.so.6.0 00:05:21.410 SO libspdk_bdev_ftl.so.6.0 00:05:21.410 SYMLINK libspdk_bdev_ftl.so 00:05:21.410 SYMLINK libspdk_bdev_zone_block.so 00:05:21.410 CC module/bdev/iscsi/bdev_iscsi.o 00:05:21.410 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:21.410 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:21.410 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:21.410 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:21.669 LIB libspdk_bdev_raid.a 00:05:21.669 SO libspdk_bdev_raid.so.6.0 00:05:21.669 SYMLINK libspdk_bdev_raid.so 00:05:21.928 LIB libspdk_bdev_iscsi.a 00:05:21.928 SO libspdk_bdev_iscsi.so.6.0 00:05:21.928 SYMLINK libspdk_bdev_iscsi.so 00:05:21.928 LIB libspdk_bdev_virtio.a 00:05:21.928 SO libspdk_bdev_virtio.so.6.0 00:05:22.187 SYMLINK libspdk_bdev_virtio.so 00:05:22.446 LIB libspdk_bdev_nvme.a 00:05:22.446 SO libspdk_bdev_nvme.so.7.0 00:05:22.446 SYMLINK libspdk_bdev_nvme.so 00:05:23.013 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:23.013 CC module/event/subsystems/vmd/vmd.o 00:05:23.013 CC module/event/subsystems/fsdev/fsdev.o 00:05:23.013 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:23.013 CC module/event/subsystems/iobuf/iobuf.o 00:05:23.013 CC module/event/subsystems/keyring/keyring.o 00:05:23.013 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:23.013 CC module/event/subsystems/sock/sock.o 00:05:23.013 CC module/event/subsystems/scheduler/scheduler.o 00:05:23.271 LIB libspdk_event_sock.a 00:05:23.271 LIB libspdk_event_scheduler.a 00:05:23.271 LIB libspdk_event_vhost_blk.a 00:05:23.271 LIB libspdk_event_fsdev.a 00:05:23.271 LIB libspdk_event_vmd.a 00:05:23.271 SO libspdk_event_sock.so.5.0 00:05:23.271 SO libspdk_event_scheduler.so.4.0 00:05:23.271 LIB libspdk_event_keyring.a 00:05:23.271 SO libspdk_event_vhost_blk.so.3.0 00:05:23.271 LIB libspdk_event_iobuf.a 00:05:23.271 SO libspdk_event_fsdev.so.1.0 00:05:23.271 SO libspdk_event_vmd.so.6.0 00:05:23.271 SO libspdk_event_keyring.so.1.0 00:05:23.271 SO libspdk_event_iobuf.so.3.0 00:05:23.271 SYMLINK libspdk_event_scheduler.so 00:05:23.271 SYMLINK libspdk_event_sock.so 00:05:23.272 SYMLINK libspdk_event_vhost_blk.so 00:05:23.272 SYMLINK libspdk_event_fsdev.so 00:05:23.272 SYMLINK libspdk_event_keyring.so 00:05:23.272 SYMLINK libspdk_event_vmd.so 00:05:23.272 SYMLINK libspdk_event_iobuf.so 00:05:23.529 CC module/event/subsystems/accel/accel.o 00:05:23.787 LIB libspdk_event_accel.a 00:05:23.787 SO libspdk_event_accel.so.6.0 00:05:23.787 SYMLINK libspdk_event_accel.so 00:05:24.045 CC module/event/subsystems/bdev/bdev.o 00:05:24.302 LIB libspdk_event_bdev.a 00:05:24.302 SO libspdk_event_bdev.so.6.0 00:05:24.560 SYMLINK libspdk_event_bdev.so 00:05:24.819 CC module/event/subsystems/ublk/ublk.o 00:05:24.819 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:24.819 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:24.819 CC module/event/subsystems/scsi/scsi.o 00:05:24.819 CC module/event/subsystems/nbd/nbd.o 00:05:24.819 LIB libspdk_event_ublk.a 00:05:24.819 LIB libspdk_event_scsi.a 00:05:24.819 LIB libspdk_event_nbd.a 00:05:24.819 SO libspdk_event_scsi.so.6.0 00:05:24.819 SO libspdk_event_ublk.so.3.0 00:05:24.819 SO libspdk_event_nbd.so.6.0 00:05:25.077 SYMLINK 
libspdk_event_ublk.so 00:05:25.077 SYMLINK libspdk_event_nbd.so 00:05:25.077 LIB libspdk_event_nvmf.a 00:05:25.077 SYMLINK libspdk_event_scsi.so 00:05:25.077 SO libspdk_event_nvmf.so.6.0 00:05:25.077 SYMLINK libspdk_event_nvmf.so 00:05:25.336 CC module/event/subsystems/iscsi/iscsi.o 00:05:25.336 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:25.336 LIB libspdk_event_vhost_scsi.a 00:05:25.595 LIB libspdk_event_iscsi.a 00:05:25.595 SO libspdk_event_vhost_scsi.so.3.0 00:05:25.595 SO libspdk_event_iscsi.so.6.0 00:05:25.595 SYMLINK libspdk_event_vhost_scsi.so 00:05:25.595 SYMLINK libspdk_event_iscsi.so 00:05:25.854 SO libspdk.so.6.0 00:05:25.854 SYMLINK libspdk.so 00:05:25.854 CC app/spdk_lspci/spdk_lspci.o 00:05:25.854 CXX app/trace/trace.o 00:05:25.854 CC app/spdk_nvme_perf/perf.o 00:05:25.854 CC app/trace_record/trace_record.o 00:05:26.112 CC app/nvmf_tgt/nvmf_main.o 00:05:26.112 CC app/iscsi_tgt/iscsi_tgt.o 00:05:26.112 CC app/spdk_tgt/spdk_tgt.o 00:05:26.112 CC examples/ioat/perf/perf.o 00:05:26.112 CC test/thread/poller_perf/poller_perf.o 00:05:26.112 CC examples/util/zipf/zipf.o 00:05:26.112 LINK spdk_lspci 00:05:26.371 LINK nvmf_tgt 00:05:26.371 LINK zipf 00:05:26.371 LINK spdk_trace_record 00:05:26.371 LINK iscsi_tgt 00:05:26.371 LINK poller_perf 00:05:26.371 LINK ioat_perf 00:05:26.371 LINK spdk_tgt 00:05:26.371 LINK spdk_trace 00:05:26.371 CC app/spdk_nvme_identify/identify.o 00:05:26.630 CC app/spdk_nvme_discover/discovery_aer.o 00:05:26.630 CC app/spdk_top/spdk_top.o 00:05:26.630 CC examples/ioat/verify/verify.o 00:05:26.630 CC app/spdk_dd/spdk_dd.o 00:05:26.630 CC test/dma/test_dma/test_dma.o 00:05:26.630 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:26.889 CC app/fio/nvme/fio_plugin.o 00:05:26.889 LINK spdk_nvme_discover 00:05:26.889 CC examples/thread/thread/thread_ex.o 00:05:26.889 LINK spdk_nvme_perf 00:05:26.889 LINK verify 00:05:26.889 LINK interrupt_tgt 00:05:27.147 CC app/vhost/vhost.o 00:05:27.147 LINK spdk_dd 00:05:27.147 LINK thread 00:05:27.147 LINK test_dma 00:05:27.147 CC examples/sock/hello_world/hello_sock.o 00:05:27.147 CC examples/vmd/lsvmd/lsvmd.o 00:05:27.405 LINK vhost 00:05:27.405 LINK spdk_nvme_identify 00:05:27.405 CC examples/idxd/perf/perf.o 00:05:27.405 LINK spdk_nvme 00:05:27.405 LINK lsvmd 00:05:27.405 CC examples/vmd/led/led.o 00:05:27.405 LINK hello_sock 00:05:27.405 CC examples/accel/perf/accel_perf.o 00:05:27.662 LINK led 00:05:27.662 CC app/fio/bdev/fio_plugin.o 00:05:27.662 LINK spdk_top 00:05:27.662 CC test/app/bdev_svc/bdev_svc.o 00:05:27.662 LINK idxd_perf 00:05:27.662 CC examples/blob/hello_world/hello_blob.o 00:05:27.662 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:27.662 CC examples/blob/cli/blobcli.o 00:05:27.662 CC test/blobfs/mkfs/mkfs.o 00:05:27.920 LINK bdev_svc 00:05:27.920 TEST_HEADER include/spdk/accel.h 00:05:27.920 TEST_HEADER include/spdk/accel_module.h 00:05:27.920 TEST_HEADER include/spdk/assert.h 00:05:27.920 TEST_HEADER include/spdk/barrier.h 00:05:27.920 TEST_HEADER include/spdk/base64.h 00:05:27.920 TEST_HEADER include/spdk/bdev.h 00:05:27.920 TEST_HEADER include/spdk/bdev_module.h 00:05:27.920 TEST_HEADER include/spdk/bdev_zone.h 00:05:27.920 TEST_HEADER include/spdk/bit_array.h 00:05:27.920 TEST_HEADER include/spdk/bit_pool.h 00:05:27.920 TEST_HEADER include/spdk/blob_bdev.h 00:05:27.920 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:27.920 TEST_HEADER include/spdk/blobfs.h 00:05:27.920 TEST_HEADER include/spdk/blob.h 00:05:27.920 TEST_HEADER include/spdk/conf.h 00:05:27.920 TEST_HEADER 
include/spdk/config.h 00:05:27.920 TEST_HEADER include/spdk/cpuset.h 00:05:27.920 TEST_HEADER include/spdk/crc16.h 00:05:27.920 TEST_HEADER include/spdk/crc32.h 00:05:27.920 TEST_HEADER include/spdk/crc64.h 00:05:27.920 TEST_HEADER include/spdk/dif.h 00:05:27.920 TEST_HEADER include/spdk/dma.h 00:05:27.920 TEST_HEADER include/spdk/endian.h 00:05:27.920 TEST_HEADER include/spdk/env_dpdk.h 00:05:27.920 CC examples/nvme/hello_world/hello_world.o 00:05:27.920 TEST_HEADER include/spdk/env.h 00:05:27.920 TEST_HEADER include/spdk/event.h 00:05:27.920 TEST_HEADER include/spdk/fd_group.h 00:05:27.920 TEST_HEADER include/spdk/fd.h 00:05:27.920 TEST_HEADER include/spdk/file.h 00:05:27.920 LINK hello_blob 00:05:27.920 TEST_HEADER include/spdk/fsdev.h 00:05:27.920 TEST_HEADER include/spdk/fsdev_module.h 00:05:27.920 TEST_HEADER include/spdk/ftl.h 00:05:27.920 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:27.920 TEST_HEADER include/spdk/gpt_spec.h 00:05:27.920 TEST_HEADER include/spdk/hexlify.h 00:05:27.920 TEST_HEADER include/spdk/histogram_data.h 00:05:27.920 TEST_HEADER include/spdk/idxd.h 00:05:27.920 TEST_HEADER include/spdk/idxd_spec.h 00:05:27.921 TEST_HEADER include/spdk/init.h 00:05:27.921 TEST_HEADER include/spdk/ioat.h 00:05:27.921 TEST_HEADER include/spdk/ioat_spec.h 00:05:27.921 TEST_HEADER include/spdk/iscsi_spec.h 00:05:27.921 TEST_HEADER include/spdk/json.h 00:05:27.921 TEST_HEADER include/spdk/jsonrpc.h 00:05:27.921 TEST_HEADER include/spdk/keyring.h 00:05:27.921 TEST_HEADER include/spdk/keyring_module.h 00:05:27.921 LINK mkfs 00:05:27.921 TEST_HEADER include/spdk/likely.h 00:05:27.921 TEST_HEADER include/spdk/log.h 00:05:27.921 TEST_HEADER include/spdk/lvol.h 00:05:27.921 TEST_HEADER include/spdk/md5.h 00:05:27.921 TEST_HEADER include/spdk/memory.h 00:05:27.921 TEST_HEADER include/spdk/mmio.h 00:05:27.921 TEST_HEADER include/spdk/nbd.h 00:05:27.921 TEST_HEADER include/spdk/net.h 00:05:27.921 TEST_HEADER include/spdk/notify.h 00:05:27.921 TEST_HEADER include/spdk/nvme.h 00:05:27.921 TEST_HEADER include/spdk/nvme_intel.h 00:05:27.921 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:27.921 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:27.921 TEST_HEADER include/spdk/nvme_spec.h 00:05:27.921 TEST_HEADER include/spdk/nvme_zns.h 00:05:27.921 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:27.921 LINK accel_perf 00:05:27.921 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:27.921 TEST_HEADER include/spdk/nvmf.h 00:05:27.921 TEST_HEADER include/spdk/nvmf_spec.h 00:05:27.921 TEST_HEADER include/spdk/nvmf_transport.h 00:05:27.921 TEST_HEADER include/spdk/opal.h 00:05:27.921 TEST_HEADER include/spdk/opal_spec.h 00:05:27.921 TEST_HEADER include/spdk/pci_ids.h 00:05:28.179 TEST_HEADER include/spdk/pipe.h 00:05:28.179 TEST_HEADER include/spdk/queue.h 00:05:28.179 TEST_HEADER include/spdk/reduce.h 00:05:28.179 TEST_HEADER include/spdk/rpc.h 00:05:28.179 LINK hello_fsdev 00:05:28.179 TEST_HEADER include/spdk/scheduler.h 00:05:28.179 TEST_HEADER include/spdk/scsi.h 00:05:28.179 TEST_HEADER include/spdk/scsi_spec.h 00:05:28.179 TEST_HEADER include/spdk/sock.h 00:05:28.179 LINK spdk_bdev 00:05:28.179 TEST_HEADER include/spdk/stdinc.h 00:05:28.179 TEST_HEADER include/spdk/string.h 00:05:28.179 TEST_HEADER include/spdk/thread.h 00:05:28.179 TEST_HEADER include/spdk/trace.h 00:05:28.179 TEST_HEADER include/spdk/trace_parser.h 00:05:28.179 TEST_HEADER include/spdk/tree.h 00:05:28.179 TEST_HEADER include/spdk/ublk.h 00:05:28.179 CC test/env/mem_callbacks/mem_callbacks.o 00:05:28.179 TEST_HEADER include/spdk/util.h 
00:05:28.179 TEST_HEADER include/spdk/uuid.h 00:05:28.179 TEST_HEADER include/spdk/version.h 00:05:28.179 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:28.179 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:28.179 TEST_HEADER include/spdk/vhost.h 00:05:28.179 TEST_HEADER include/spdk/vmd.h 00:05:28.179 TEST_HEADER include/spdk/xor.h 00:05:28.179 TEST_HEADER include/spdk/zipf.h 00:05:28.179 CXX test/cpp_headers/accel.o 00:05:28.179 LINK hello_world 00:05:28.179 CC examples/nvme/reconnect/reconnect.o 00:05:28.179 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:28.179 LINK blobcli 00:05:28.179 CXX test/cpp_headers/accel_module.o 00:05:28.179 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:28.179 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:28.437 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:28.437 CC test/app/histogram_perf/histogram_perf.o 00:05:28.437 CC examples/bdev/hello_world/hello_bdev.o 00:05:28.437 CXX test/cpp_headers/assert.o 00:05:28.437 LINK histogram_perf 00:05:28.437 CC test/app/jsoncat/jsoncat.o 00:05:28.437 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:28.696 LINK reconnect 00:05:28.696 LINK nvme_fuzz 00:05:28.696 CXX test/cpp_headers/barrier.o 00:05:28.696 LINK hello_bdev 00:05:28.696 LINK jsoncat 00:05:28.696 CXX test/cpp_headers/base64.o 00:05:28.696 LINK mem_callbacks 00:05:28.696 CC examples/nvme/arbitration/arbitration.o 00:05:28.696 LINK vhost_fuzz 00:05:28.954 CC test/env/vtophys/vtophys.o 00:05:28.954 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:28.954 CC examples/nvme/hotplug/hotplug.o 00:05:28.954 CXX test/cpp_headers/bdev.o 00:05:28.954 CC test/env/memory/memory_ut.o 00:05:28.954 CC examples/bdev/bdevperf/bdevperf.o 00:05:28.954 LINK nvme_manage 00:05:28.954 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:29.211 LINK arbitration 00:05:29.211 LINK vtophys 00:05:29.211 LINK env_dpdk_post_init 00:05:29.211 CXX test/cpp_headers/bdev_module.o 00:05:29.211 CXX test/cpp_headers/bdev_zone.o 00:05:29.211 LINK cmb_copy 00:05:29.211 LINK hotplug 00:05:29.211 CXX test/cpp_headers/bit_array.o 00:05:29.211 CXX test/cpp_headers/bit_pool.o 00:05:29.469 CXX test/cpp_headers/blob_bdev.o 00:05:29.469 CC examples/nvme/abort/abort.o 00:05:29.469 CC test/env/pci/pci_ut.o 00:05:29.469 CXX test/cpp_headers/blobfs_bdev.o 00:05:29.469 CC test/rpc_client/rpc_client_test.o 00:05:29.727 CC test/event/event_perf/event_perf.o 00:05:29.727 CC test/nvme/aer/aer.o 00:05:29.727 CC test/lvol/esnap/esnap.o 00:05:29.727 CXX test/cpp_headers/blobfs.o 00:05:29.727 LINK rpc_client_test 00:05:29.727 LINK event_perf 00:05:29.985 LINK bdevperf 00:05:29.985 LINK abort 00:05:29.985 LINK pci_ut 00:05:29.985 CXX test/cpp_headers/blob.o 00:05:29.985 LINK aer 00:05:29.985 CC test/event/reactor/reactor.o 00:05:29.985 CC test/event/reactor_perf/reactor_perf.o 00:05:29.985 LINK iscsi_fuzz 00:05:30.243 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:30.243 CXX test/cpp_headers/conf.o 00:05:30.243 CC test/event/app_repeat/app_repeat.o 00:05:30.243 LINK reactor_perf 00:05:30.243 LINK reactor 00:05:30.243 CC test/nvme/reset/reset.o 00:05:30.243 LINK memory_ut 00:05:30.243 CC test/event/scheduler/scheduler.o 00:05:30.243 CXX test/cpp_headers/config.o 00:05:30.243 LINK app_repeat 00:05:30.243 CXX test/cpp_headers/cpuset.o 00:05:30.501 LINK pmr_persistence 00:05:30.501 CC test/app/stub/stub.o 00:05:30.501 CC test/nvme/sgl/sgl.o 00:05:30.501 LINK reset 00:05:30.501 CC test/nvme/e2edp/nvme_dp.o 00:05:30.501 CXX test/cpp_headers/crc16.o 00:05:30.501 LINK scheduler 00:05:30.501 CC test/accel/dif/dif.o 
00:05:30.501 LINK stub 00:05:30.760 CC test/nvme/overhead/overhead.o 00:05:30.760 CXX test/cpp_headers/crc32.o 00:05:30.760 LINK sgl 00:05:30.760 CXX test/cpp_headers/crc64.o 00:05:30.760 CC test/nvme/err_injection/err_injection.o 00:05:30.760 CC examples/nvmf/nvmf/nvmf.o 00:05:30.760 CXX test/cpp_headers/dif.o 00:05:30.760 LINK nvme_dp 00:05:31.018 LINK overhead 00:05:31.018 CXX test/cpp_headers/dma.o 00:05:31.018 LINK err_injection 00:05:31.018 CC test/nvme/startup/startup.o 00:05:31.018 CC test/nvme/reserve/reserve.o 00:05:31.018 CC test/nvme/simple_copy/simple_copy.o 00:05:31.018 CC test/nvme/connect_stress/connect_stress.o 00:05:31.018 LINK nvmf 00:05:31.276 CXX test/cpp_headers/endian.o 00:05:31.276 LINK startup 00:05:31.276 CC test/nvme/boot_partition/boot_partition.o 00:05:31.276 CC test/nvme/compliance/nvme_compliance.o 00:05:31.276 LINK dif 00:05:31.276 LINK connect_stress 00:05:31.276 LINK reserve 00:05:31.276 LINK simple_copy 00:05:31.276 CXX test/cpp_headers/env_dpdk.o 00:05:31.535 LINK boot_partition 00:05:31.535 CC test/nvme/fused_ordering/fused_ordering.o 00:05:31.535 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:31.535 CXX test/cpp_headers/env.o 00:05:31.535 CXX test/cpp_headers/event.o 00:05:31.535 CC test/nvme/cuse/cuse.o 00:05:31.535 CC test/nvme/fdp/fdp.o 00:05:31.535 CXX test/cpp_headers/fd_group.o 00:05:31.535 LINK nvme_compliance 00:05:31.535 LINK doorbell_aers 00:05:31.535 CXX test/cpp_headers/fd.o 00:05:31.535 LINK fused_ordering 00:05:31.794 CC test/bdev/bdevio/bdevio.o 00:05:31.794 CXX test/cpp_headers/file.o 00:05:31.794 CXX test/cpp_headers/fsdev.o 00:05:31.794 CXX test/cpp_headers/fsdev_module.o 00:05:31.794 CXX test/cpp_headers/ftl.o 00:05:31.794 CXX test/cpp_headers/fuse_dispatcher.o 00:05:31.794 CXX test/cpp_headers/gpt_spec.o 00:05:31.794 CXX test/cpp_headers/hexlify.o 00:05:31.794 LINK fdp 00:05:31.794 CXX test/cpp_headers/histogram_data.o 00:05:32.052 CXX test/cpp_headers/idxd.o 00:05:32.052 CXX test/cpp_headers/idxd_spec.o 00:05:32.052 CXX test/cpp_headers/init.o 00:05:32.052 CXX test/cpp_headers/ioat.o 00:05:32.052 CXX test/cpp_headers/ioat_spec.o 00:05:32.052 LINK bdevio 00:05:32.052 CXX test/cpp_headers/iscsi_spec.o 00:05:32.052 CXX test/cpp_headers/json.o 00:05:32.052 CXX test/cpp_headers/jsonrpc.o 00:05:32.310 CXX test/cpp_headers/keyring.o 00:05:32.310 CXX test/cpp_headers/keyring_module.o 00:05:32.310 CXX test/cpp_headers/likely.o 00:05:32.310 CXX test/cpp_headers/log.o 00:05:32.310 CXX test/cpp_headers/lvol.o 00:05:32.310 CXX test/cpp_headers/md5.o 00:05:32.310 CXX test/cpp_headers/memory.o 00:05:32.310 CXX test/cpp_headers/mmio.o 00:05:32.310 CXX test/cpp_headers/nbd.o 00:05:32.310 CXX test/cpp_headers/net.o 00:05:32.310 CXX test/cpp_headers/notify.o 00:05:32.310 CXX test/cpp_headers/nvme.o 00:05:32.310 CXX test/cpp_headers/nvme_intel.o 00:05:32.310 CXX test/cpp_headers/nvme_ocssd.o 00:05:32.566 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:32.566 CXX test/cpp_headers/nvme_spec.o 00:05:32.566 CXX test/cpp_headers/nvme_zns.o 00:05:32.566 CXX test/cpp_headers/nvmf_cmd.o 00:05:32.566 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:32.566 CXX test/cpp_headers/nvmf.o 00:05:32.566 CXX test/cpp_headers/nvmf_spec.o 00:05:32.566 CXX test/cpp_headers/nvmf_transport.o 00:05:32.566 CXX test/cpp_headers/opal.o 00:05:32.566 CXX test/cpp_headers/opal_spec.o 00:05:32.832 CXX test/cpp_headers/pci_ids.o 00:05:32.832 CXX test/cpp_headers/pipe.o 00:05:32.832 CXX test/cpp_headers/queue.o 00:05:32.832 CXX test/cpp_headers/reduce.o 00:05:32.832 CXX 
test/cpp_headers/rpc.o 00:05:32.832 CXX test/cpp_headers/scheduler.o 00:05:32.832 CXX test/cpp_headers/scsi.o 00:05:32.832 LINK cuse 00:05:32.832 CXX test/cpp_headers/scsi_spec.o 00:05:32.832 CXX test/cpp_headers/sock.o 00:05:32.832 CXX test/cpp_headers/stdinc.o 00:05:32.832 CXX test/cpp_headers/string.o 00:05:33.091 CXX test/cpp_headers/thread.o 00:05:33.091 CXX test/cpp_headers/trace.o 00:05:33.091 CXX test/cpp_headers/trace_parser.o 00:05:33.091 CXX test/cpp_headers/tree.o 00:05:33.091 CXX test/cpp_headers/ublk.o 00:05:33.091 CXX test/cpp_headers/util.o 00:05:33.091 CXX test/cpp_headers/uuid.o 00:05:33.091 CXX test/cpp_headers/version.o 00:05:33.091 CXX test/cpp_headers/vfio_user_pci.o 00:05:33.091 CXX test/cpp_headers/vfio_user_spec.o 00:05:33.091 CXX test/cpp_headers/vhost.o 00:05:33.091 CXX test/cpp_headers/vmd.o 00:05:33.091 CXX test/cpp_headers/xor.o 00:05:33.091 CXX test/cpp_headers/zipf.o 00:05:34.989 LINK esnap 00:05:34.989 00:05:34.989 real 1m31.715s 00:05:34.989 user 8m28.410s 00:05:34.989 sys 1m42.803s 00:05:34.989 13:39:45 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:34.989 13:39:45 make -- common/autotest_common.sh@10 -- $ set +x 00:05:34.989 ************************************ 00:05:34.989 END TEST make 00:05:34.989 ************************************ 00:05:35.247 13:39:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:35.247 13:39:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:35.247 13:39:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:35.247 13:39:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:35.247 13:39:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:35.247 13:39:45 -- pm/common@44 -- $ pid=5406 00:05:35.247 13:39:45 -- pm/common@50 -- $ kill -TERM 5406 00:05:35.247 13:39:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:35.247 13:39:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:35.247 13:39:45 -- pm/common@44 -- $ pid=5407 00:05:35.247 13:39:45 -- pm/common@50 -- $ kill -TERM 5407 00:05:35.247 13:39:45 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:35.247 13:39:45 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:35.247 13:39:45 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:35.247 13:39:45 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:35.247 13:39:45 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.247 13:39:45 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.247 13:39:45 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.247 13:39:45 -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.247 13:39:45 -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.247 13:39:45 -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.247 13:39:45 -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.247 13:39:45 -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.247 13:39:45 -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.247 13:39:45 -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.247 13:39:45 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.247 13:39:45 -- scripts/common.sh@344 -- # case "$op" in 00:05:35.247 13:39:45 -- scripts/common.sh@345 -- # : 1 00:05:35.247 13:39:45 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.247 13:39:45 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.247 13:39:45 -- scripts/common.sh@365 -- # decimal 1 00:05:35.247 13:39:45 -- scripts/common.sh@353 -- # local d=1 00:05:35.247 13:39:45 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.247 13:39:45 -- scripts/common.sh@355 -- # echo 1 00:05:35.247 13:39:45 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.247 13:39:45 -- scripts/common.sh@366 -- # decimal 2 00:05:35.247 13:39:45 -- scripts/common.sh@353 -- # local d=2 00:05:35.247 13:39:45 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.247 13:39:45 -- scripts/common.sh@355 -- # echo 2 00:05:35.247 13:39:45 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.247 13:39:45 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.247 13:39:45 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.247 13:39:45 -- scripts/common.sh@368 -- # return 0 00:05:35.247 13:39:45 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.247 13:39:45 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:35.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.247 --rc genhtml_branch_coverage=1 00:05:35.247 --rc genhtml_function_coverage=1 00:05:35.247 --rc genhtml_legend=1 00:05:35.247 --rc geninfo_all_blocks=1 00:05:35.247 --rc geninfo_unexecuted_blocks=1 00:05:35.247 00:05:35.247 ' 00:05:35.247 13:39:45 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:35.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.247 --rc genhtml_branch_coverage=1 00:05:35.247 --rc genhtml_function_coverage=1 00:05:35.247 --rc genhtml_legend=1 00:05:35.247 --rc geninfo_all_blocks=1 00:05:35.247 --rc geninfo_unexecuted_blocks=1 00:05:35.247 00:05:35.247 ' 00:05:35.247 13:39:45 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:35.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.247 --rc genhtml_branch_coverage=1 00:05:35.247 --rc genhtml_function_coverage=1 00:05:35.247 --rc genhtml_legend=1 00:05:35.247 --rc geninfo_all_blocks=1 00:05:35.247 --rc geninfo_unexecuted_blocks=1 00:05:35.247 00:05:35.247 ' 00:05:35.247 13:39:45 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:35.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.247 --rc genhtml_branch_coverage=1 00:05:35.247 --rc genhtml_function_coverage=1 00:05:35.247 --rc genhtml_legend=1 00:05:35.247 --rc geninfo_all_blocks=1 00:05:35.247 --rc geninfo_unexecuted_blocks=1 00:05:35.248 00:05:35.248 ' 00:05:35.248 13:39:45 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.248 13:39:45 -- nvmf/common.sh@7 -- # uname -s 00:05:35.248 13:39:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.248 13:39:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.248 13:39:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.248 13:39:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.248 13:39:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.248 13:39:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.248 13:39:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.248 13:39:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.248 13:39:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.248 13:39:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.248 13:39:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:05:35.248 
13:39:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:05:35.248 13:39:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.248 13:39:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.248 13:39:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:35.248 13:39:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.248 13:39:45 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.248 13:39:45 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.248 13:39:45 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.248 13:39:45 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.248 13:39:45 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.248 13:39:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.248 13:39:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.248 13:39:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.248 13:39:45 -- paths/export.sh@5 -- # export PATH 00:05:35.248 13:39:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.248 13:39:45 -- nvmf/common.sh@51 -- # : 0 00:05:35.248 13:39:45 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.248 13:39:45 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.248 13:39:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.248 13:39:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.248 13:39:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.248 13:39:45 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.248 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.248 13:39:45 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.248 13:39:45 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.248 13:39:45 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.248 13:39:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:35.248 13:39:45 -- spdk/autotest.sh@32 -- # uname -s 00:05:35.248 13:39:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:35.248 13:39:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:35.248 13:39:45 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:35.506 13:39:45 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:35.506 13:39:45 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:35.506 13:39:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:35.506 13:39:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:35.506 13:39:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:35.506 13:39:45 -- spdk/autotest.sh@48 -- # udevadm_pid=54507 00:05:35.506 13:39:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:35.506 13:39:45 -- pm/common@17 -- # local monitor 00:05:35.506 13:39:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:35.506 13:39:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:35.506 13:39:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:35.506 13:39:45 -- pm/common@25 -- # sleep 1 00:05:35.506 13:39:45 -- pm/common@21 -- # date +%s 00:05:35.506 13:39:45 -- pm/common@21 -- # date +%s 00:05:35.506 13:39:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727789985 00:05:35.506 13:39:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727789985 00:05:35.506 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727789985_collect-vmstat.pm.log 00:05:35.506 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727789985_collect-cpu-load.pm.log 00:05:36.473 13:39:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:36.473 13:39:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:36.473 13:39:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:36.473 13:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:36.473 13:39:46 -- spdk/autotest.sh@59 -- # create_test_list 00:05:36.473 13:39:46 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:36.473 13:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:36.473 13:39:46 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:36.473 13:39:46 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:36.473 13:39:46 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:36.473 13:39:46 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:36.473 13:39:46 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:36.473 13:39:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:36.473 13:39:46 -- common/autotest_common.sh@1455 -- # uname 00:05:36.473 13:39:46 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:36.473 13:39:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:36.473 13:39:46 -- common/autotest_common.sh@1475 -- # uname 00:05:36.473 13:39:46 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:36.473 13:39:46 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:36.473 13:39:46 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:36.731 lcov: LCOV version 1.15 00:05:36.731 13:39:46 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:54.810 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:54.810 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:12.892 13:40:20 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:12.892 13:40:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:12.892 13:40:20 -- common/autotest_common.sh@10 -- # set +x 00:06:12.892 13:40:20 -- spdk/autotest.sh@78 -- # rm -f 00:06:12.892 13:40:20 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:12.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:12.892 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:12.892 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:12.892 13:40:21 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:12.892 13:40:21 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:12.892 13:40:21 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:12.892 13:40:21 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:12.892 13:40:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:12.892 13:40:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:12.892 13:40:21 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:12.893 13:40:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:12.893 13:40:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:12.893 13:40:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:12.893 13:40:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:12.893 13:40:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:12.893 13:40:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:12.893 13:40:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:12.893 13:40:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:12.893 13:40:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:06:12.893 13:40:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:06:12.893 13:40:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:12.893 13:40:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:12.893 13:40:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:12.893 13:40:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:06:12.893 13:40:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:06:12.893 13:40:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:12.893 13:40:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:12.893 13:40:21 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:12.893 13:40:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.893 13:40:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.893 13:40:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:12.893 13:40:21 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:12.893 13:40:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:12.893 No valid GPT data, bailing 
00:06:12.893 13:40:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:12.893 13:40:21 -- scripts/common.sh@394 -- # pt= 00:06:12.893 13:40:21 -- scripts/common.sh@395 -- # return 1 00:06:12.893 13:40:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:12.893 1+0 records in 00:06:12.893 1+0 records out 00:06:12.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423696 s, 247 MB/s 00:06:12.893 13:40:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.893 13:40:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.893 13:40:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:12.893 13:40:21 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:12.893 13:40:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:12.893 No valid GPT data, bailing 00:06:12.893 13:40:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:12.893 13:40:21 -- scripts/common.sh@394 -- # pt= 00:06:12.893 13:40:21 -- scripts/common.sh@395 -- # return 1 00:06:12.893 13:40:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:12.893 1+0 records in 00:06:12.893 1+0 records out 00:06:12.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431754 s, 243 MB/s 00:06:12.893 13:40:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.893 13:40:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.893 13:40:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:12.893 13:40:21 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:12.893 13:40:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:12.893 No valid GPT data, bailing 00:06:12.893 13:40:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:12.893 13:40:21 -- scripts/common.sh@394 -- # pt= 00:06:12.893 13:40:21 -- scripts/common.sh@395 -- # return 1 00:06:12.893 13:40:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:12.893 1+0 records in 00:06:12.893 1+0 records out 00:06:12.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00468487 s, 224 MB/s 00:06:12.893 13:40:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.893 13:40:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.893 13:40:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:12.893 13:40:21 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:12.893 13:40:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:12.893 No valid GPT data, bailing 00:06:12.893 13:40:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:12.893 13:40:21 -- scripts/common.sh@394 -- # pt= 00:06:12.893 13:40:21 -- scripts/common.sh@395 -- # return 1 00:06:12.893 13:40:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:12.893 1+0 records in 00:06:12.893 1+0 records out 00:06:12.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00389509 s, 269 MB/s 00:06:12.893 13:40:21 -- spdk/autotest.sh@105 -- # sync 00:06:12.893 13:40:21 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:12.893 13:40:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:12.893 13:40:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:13.828 13:40:23 -- spdk/autotest.sh@111 -- # uname -s 00:06:13.828 13:40:23 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:06:13.828 13:40:23 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:13.828 13:40:23 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:14.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:14.394 Hugepages 00:06:14.394 node hugesize free / total 00:06:14.394 node0 1048576kB 0 / 0 00:06:14.394 node0 2048kB 0 / 0 00:06:14.394 00:06:14.394 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:14.394 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:14.394 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:14.394 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:14.394 13:40:24 -- spdk/autotest.sh@117 -- # uname -s 00:06:14.394 13:40:24 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:14.394 13:40:24 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:14.394 13:40:24 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:15.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:15.329 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:15.329 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:15.329 13:40:25 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:16.263 13:40:26 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:16.263 13:40:26 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:16.263 13:40:26 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:16.263 13:40:26 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:16.263 13:40:26 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:16.263 13:40:26 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:16.263 13:40:26 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:16.263 13:40:26 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:16.263 13:40:26 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:16.522 13:40:26 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:16.522 13:40:26 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:16.522 13:40:26 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:16.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:16.780 Waiting for block devices as requested 00:06:16.780 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:17.038 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:17.038 13:40:27 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:17.038 13:40:27 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:17.038 13:40:27 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:17.038 13:40:27 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:17.038 13:40:27 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:17.038 13:40:27 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:17.038 13:40:27 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:17.038 13:40:27 -- common/autotest_common.sh@1490 -- # 
printf '%s\n' nvme1 00:06:17.038 13:40:27 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:17.038 13:40:27 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:17.038 13:40:27 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:17.038 13:40:27 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:17.038 13:40:27 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:17.038 13:40:27 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:17.038 13:40:27 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:17.038 13:40:27 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:17.038 13:40:27 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:17.038 13:40:27 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:17.038 13:40:27 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:17.038 13:40:27 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:17.038 13:40:27 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:17.038 13:40:27 -- common/autotest_common.sh@1541 -- # continue 00:06:17.038 13:40:27 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:17.038 13:40:27 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:17.038 13:40:27 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:17.038 13:40:27 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:17.038 13:40:27 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:17.038 13:40:27 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:17.038 13:40:27 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:17.038 13:40:27 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:17.038 13:40:27 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:17.038 13:40:27 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:17.038 13:40:27 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:17.038 13:40:27 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:17.038 13:40:27 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:17.038 13:40:27 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:17.038 13:40:27 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:17.038 13:40:27 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:17.038 13:40:27 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:17.038 13:40:27 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:17.038 13:40:27 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:17.038 13:40:27 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:17.038 13:40:27 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:17.038 13:40:27 -- common/autotest_common.sh@1541 -- # continue 00:06:17.038 13:40:27 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:17.038 13:40:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.038 13:40:27 -- common/autotest_common.sh@10 -- # set +x 00:06:17.038 13:40:27 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:17.038 13:40:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.038 13:40:27 -- common/autotest_common.sh@10 -- # set +x 00:06:17.038 13:40:27 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:17.974 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.974 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:17.974 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:17.974 13:40:28 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:17.974 13:40:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.974 13:40:28 -- common/autotest_common.sh@10 -- # set +x 00:06:17.974 13:40:28 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:17.974 13:40:28 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:17.974 13:40:28 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:17.974 13:40:28 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:17.974 13:40:28 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:17.974 13:40:28 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:17.974 13:40:28 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:17.974 13:40:28 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:17.974 13:40:28 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:17.974 13:40:28 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:17.974 13:40:28 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:17.974 13:40:28 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:17.974 13:40:28 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:17.974 13:40:28 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:17.974 13:40:28 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:17.974 13:40:28 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:17.974 13:40:28 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:17.974 13:40:28 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:17.974 13:40:28 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:17.974 13:40:28 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:17.974 13:40:28 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:17.974 13:40:28 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:17.974 13:40:28 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:17.974 13:40:28 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:17.974 13:40:28 -- common/autotest_common.sh@1570 -- # return 0 00:06:17.974 13:40:28 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:17.974 13:40:28 -- common/autotest_common.sh@1578 -- # return 0 00:06:17.974 13:40:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:17.974 13:40:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:17.974 13:40:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:17.974 13:40:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:17.974 13:40:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:17.974 13:40:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.974 13:40:28 -- common/autotest_common.sh@10 -- # set +x 00:06:17.974 13:40:28 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:17.974 13:40:28 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:17.974 13:40:28 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:17.974 13:40:28 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:17.974 13:40:28 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.974 13:40:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.974 13:40:28 -- common/autotest_common.sh@10 -- # set +x 00:06:17.974 ************************************ 00:06:17.974 START TEST env 00:06:17.974 ************************************ 00:06:17.974 13:40:28 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:18.233 * Looking for test storage... 00:06:18.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:18.233 13:40:28 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.233 13:40:28 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.233 13:40:28 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.233 13:40:28 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.233 13:40:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.233 13:40:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.233 13:40:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.233 13:40:28 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.233 13:40:28 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.233 13:40:28 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.233 13:40:28 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.233 13:40:28 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.233 13:40:28 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.233 13:40:28 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.233 13:40:28 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.233 13:40:28 env -- scripts/common.sh@344 -- # case "$op" in 00:06:18.233 13:40:28 env -- scripts/common.sh@345 -- # : 1 00:06:18.233 13:40:28 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.233 13:40:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.233 13:40:28 env -- scripts/common.sh@365 -- # decimal 1 00:06:18.233 13:40:28 env -- scripts/common.sh@353 -- # local d=1 00:06:18.233 13:40:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.233 13:40:28 env -- scripts/common.sh@355 -- # echo 1 00:06:18.233 13:40:28 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.233 13:40:28 env -- scripts/common.sh@366 -- # decimal 2 00:06:18.233 13:40:28 env -- scripts/common.sh@353 -- # local d=2 00:06:18.233 13:40:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.233 13:40:28 env -- scripts/common.sh@355 -- # echo 2 00:06:18.233 13:40:28 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.233 13:40:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.233 13:40:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.233 13:40:28 env -- scripts/common.sh@368 -- # return 0 00:06:18.233 13:40:28 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.233 13:40:28 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.233 --rc genhtml_branch_coverage=1 00:06:18.233 --rc genhtml_function_coverage=1 00:06:18.233 --rc genhtml_legend=1 00:06:18.233 --rc geninfo_all_blocks=1 00:06:18.233 --rc geninfo_unexecuted_blocks=1 00:06:18.233 00:06:18.233 ' 00:06:18.233 13:40:28 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:18.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.233 --rc genhtml_branch_coverage=1 00:06:18.233 --rc genhtml_function_coverage=1 00:06:18.233 --rc genhtml_legend=1 00:06:18.233 --rc geninfo_all_blocks=1 00:06:18.233 --rc geninfo_unexecuted_blocks=1 00:06:18.233 00:06:18.233 ' 00:06:18.233 13:40:28 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:18.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.233 --rc genhtml_branch_coverage=1 00:06:18.233 --rc genhtml_function_coverage=1 00:06:18.233 --rc genhtml_legend=1 00:06:18.233 --rc geninfo_all_blocks=1 00:06:18.233 --rc geninfo_unexecuted_blocks=1 00:06:18.233 00:06:18.233 ' 00:06:18.233 13:40:28 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.233 --rc genhtml_branch_coverage=1 00:06:18.233 --rc genhtml_function_coverage=1 00:06:18.233 --rc genhtml_legend=1 00:06:18.233 --rc geninfo_all_blocks=1 00:06:18.233 --rc geninfo_unexecuted_blocks=1 00:06:18.233 00:06:18.233 ' 00:06:18.233 13:40:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:18.233 13:40:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.233 13:40:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.233 13:40:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.233 ************************************ 00:06:18.233 START TEST env_memory 00:06:18.233 ************************************ 00:06:18.233 13:40:28 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:18.233 00:06:18.233 00:06:18.233 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.233 http://cunit.sourceforge.net/ 00:06:18.233 00:06:18.233 00:06:18.233 Suite: memory 00:06:18.233 Test: alloc and free memory map ...[2024-10-01 13:40:28.388886] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:18.233 passed 00:06:18.492 Test: mem map translation ...[2024-10-01 13:40:28.419892] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:18.492 [2024-10-01 13:40:28.419943] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:18.492 [2024-10-01 13:40:28.420000] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:18.492 [2024-10-01 13:40:28.420012] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:18.492 passed 00:06:18.492 Test: mem map registration ...[2024-10-01 13:40:28.483966] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:18.492 [2024-10-01 13:40:28.484035] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:18.492 passed 00:06:18.492 Test: mem map adjacent registrations ...passed 00:06:18.492 00:06:18.492 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.492 suites 1 1 n/a 0 0 00:06:18.492 tests 4 4 4 0 0 00:06:18.492 asserts 152 152 152 0 n/a 00:06:18.492 00:06:18.492 Elapsed time = 0.214 seconds 00:06:18.492 00:06:18.492 real 0m0.231s 00:06:18.492 user 0m0.215s 00:06:18.492 sys 0m0.013s 00:06:18.492 13:40:28 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.492 13:40:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:18.492 ************************************ 00:06:18.492 END TEST env_memory 00:06:18.492 ************************************ 00:06:18.492 13:40:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:18.492 13:40:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.492 13:40:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.492 13:40:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.492 ************************************ 00:06:18.492 START TEST env_vtophys 00:06:18.492 ************************************ 00:06:18.492 13:40:28 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:18.492 EAL: lib.eal log level changed from notice to debug 00:06:18.492 EAL: Detected lcore 0 as core 0 on socket 0 00:06:18.492 EAL: Detected lcore 1 as core 0 on socket 0 00:06:18.492 EAL: Detected lcore 2 as core 0 on socket 0 00:06:18.492 EAL: Detected lcore 3 as core 0 on socket 0 00:06:18.492 EAL: Detected lcore 4 as core 0 on socket 0 00:06:18.492 EAL: Detected lcore 5 as core 0 on socket 0 00:06:18.492 EAL: Detected lcore 6 as core 0 on socket 0 00:06:18.492 EAL: Detected lcore 7 as core 0 on socket 0 00:06:18.492 EAL: Detected lcore 8 as core 0 on socket 0 00:06:18.492 EAL: Detected lcore 9 as core 0 on socket 0 00:06:18.492 EAL: Maximum logical cores by configuration: 128 00:06:18.492 EAL: Detected CPU lcores: 10 00:06:18.492 EAL: Detected NUMA nodes: 1 00:06:18.492 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:18.492 EAL: Detected shared linkage of DPDK 00:06:18.492 EAL: No 
shared files mode enabled, IPC will be disabled 00:06:18.492 EAL: Selected IOVA mode 'PA' 00:06:18.492 EAL: Probing VFIO support... 00:06:18.492 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:18.492 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:18.492 EAL: Ask a virtual area of 0x2e000 bytes 00:06:18.492 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:18.492 EAL: Setting up physically contiguous memory... 00:06:18.492 EAL: Setting maximum number of open files to 524288 00:06:18.492 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:18.492 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:18.492 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.492 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:18.492 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.492 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.492 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:18.492 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:18.492 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.492 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:18.493 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.493 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.493 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:18.493 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:18.493 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.493 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:18.493 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.493 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.493 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:18.493 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:18.493 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.493 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:18.493 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.493 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.493 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:18.493 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:18.493 EAL: Hugepages will be freed exactly as allocated. 00:06:18.493 EAL: No shared files mode enabled, IPC is disabled 00:06:18.493 EAL: No shared files mode enabled, IPC is disabled 00:06:18.750 EAL: TSC frequency is ~2200000 KHz 00:06:18.750 EAL: Main lcore 0 is ready (tid=7f1fac91aa00;cpuset=[0]) 00:06:18.750 EAL: Trying to obtain current memory policy. 00:06:18.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.751 EAL: Restoring previous memory policy: 0 00:06:18.751 EAL: request: mp_malloc_sync 00:06:18.751 EAL: No shared files mode enabled, IPC is disabled 00:06:18.751 EAL: Heap on socket 0 was expanded by 2MB 00:06:18.751 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:18.751 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:18.751 EAL: Mem event callback 'spdk:(nil)' registered 00:06:18.751 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:06:18.751 00:06:18.751 00:06:18.751 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.751 http://cunit.sourceforge.net/ 00:06:18.751 00:06:18.751 00:06:18.751 Suite: components_suite 00:06:18.751 Test: vtophys_malloc_test ...passed 00:06:18.751 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:18.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.751 EAL: Restoring previous memory policy: 4 00:06:18.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.751 EAL: request: mp_malloc_sync 00:06:18.751 EAL: No shared files mode enabled, IPC is disabled 00:06:18.751 EAL: Heap on socket 0 was expanded by 4MB 00:06:18.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.751 EAL: request: mp_malloc_sync 00:06:18.751 EAL: No shared files mode enabled, IPC is disabled 00:06:18.751 EAL: Heap on socket 0 was shrunk by 4MB 00:06:18.751 EAL: Trying to obtain current memory policy. 00:06:18.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.751 EAL: Restoring previous memory policy: 4 00:06:18.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.751 EAL: request: mp_malloc_sync 00:06:18.751 EAL: No shared files mode enabled, IPC is disabled 00:06:18.751 EAL: Heap on socket 0 was expanded by 6MB 00:06:18.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.751 EAL: request: mp_malloc_sync 00:06:18.751 EAL: No shared files mode enabled, IPC is disabled 00:06:18.751 EAL: Heap on socket 0 was shrunk by 6MB 00:06:18.751 EAL: Trying to obtain current memory policy. 00:06:18.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.751 EAL: Restoring previous memory policy: 4 00:06:18.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.751 EAL: request: mp_malloc_sync 00:06:18.751 EAL: No shared files mode enabled, IPC is disabled 00:06:18.751 EAL: Heap on socket 0 was expanded by 10MB 00:06:18.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.751 EAL: request: mp_malloc_sync 00:06:18.751 EAL: No shared files mode enabled, IPC is disabled 00:06:18.751 EAL: Heap on socket 0 was shrunk by 10MB 00:06:18.751 EAL: Trying to obtain current memory policy. 00:06:18.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.751 EAL: Restoring previous memory policy: 4 00:06:18.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.751 EAL: request: mp_malloc_sync 00:06:18.751 EAL: No shared files mode enabled, IPC is disabled 00:06:18.751 EAL: Heap on socket 0 was expanded by 18MB 00:06:18.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.751 EAL: request: mp_malloc_sync 00:06:18.751 EAL: No shared files mode enabled, IPC is disabled 00:06:18.751 EAL: Heap on socket 0 was shrunk by 18MB 00:06:18.751 EAL: Trying to obtain current memory policy. 00:06:18.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.751 EAL: Restoring previous memory policy: 4 00:06:18.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.751 EAL: request: mp_malloc_sync 00:06:18.751 EAL: No shared files mode enabled, IPC is disabled 00:06:18.751 EAL: Heap on socket 0 was expanded by 34MB 00:06:18.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.751 EAL: request: mp_malloc_sync 00:06:18.751 EAL: No shared files mode enabled, IPC is disabled 00:06:18.751 EAL: Heap on socket 0 was shrunk by 34MB 00:06:18.751 EAL: Trying to obtain current memory policy. 
00:06:18.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.751 EAL: Restoring previous memory policy: 4 00:06:18.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.751 EAL: request: mp_malloc_sync 00:06:18.751 EAL: No shared files mode enabled, IPC is disabled 00:06:18.751 EAL: Heap on socket 0 was expanded by 66MB 00:06:18.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.751 EAL: request: mp_malloc_sync 00:06:18.751 EAL: No shared files mode enabled, IPC is disabled 00:06:18.751 EAL: Heap on socket 0 was shrunk by 66MB 00:06:18.751 EAL: Trying to obtain current memory policy. 00:06:18.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.751 EAL: Restoring previous memory policy: 4 00:06:18.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.751 EAL: request: mp_malloc_sync 00:06:18.751 EAL: No shared files mode enabled, IPC is disabled 00:06:18.751 EAL: Heap on socket 0 was expanded by 130MB 00:06:18.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.009 EAL: request: mp_malloc_sync 00:06:19.009 EAL: No shared files mode enabled, IPC is disabled 00:06:19.009 EAL: Heap on socket 0 was shrunk by 130MB 00:06:19.009 EAL: Trying to obtain current memory policy. 00:06:19.009 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.009 EAL: Restoring previous memory policy: 4 00:06:19.009 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.009 EAL: request: mp_malloc_sync 00:06:19.009 EAL: No shared files mode enabled, IPC is disabled 00:06:19.009 EAL: Heap on socket 0 was expanded by 258MB 00:06:19.009 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.009 EAL: request: mp_malloc_sync 00:06:19.009 EAL: No shared files mode enabled, IPC is disabled 00:06:19.009 EAL: Heap on socket 0 was shrunk by 258MB 00:06:19.009 EAL: Trying to obtain current memory policy. 00:06:19.009 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.266 EAL: Restoring previous memory policy: 4 00:06:19.266 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.266 EAL: request: mp_malloc_sync 00:06:19.266 EAL: No shared files mode enabled, IPC is disabled 00:06:19.266 EAL: Heap on socket 0 was expanded by 514MB 00:06:19.266 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.266 EAL: request: mp_malloc_sync 00:06:19.266 EAL: No shared files mode enabled, IPC is disabled 00:06:19.266 EAL: Heap on socket 0 was shrunk by 514MB 00:06:19.266 EAL: Trying to obtain current memory policy. 
00:06:19.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.523 EAL: Restoring previous memory policy: 4 00:06:19.523 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.523 EAL: request: mp_malloc_sync 00:06:19.523 EAL: No shared files mode enabled, IPC is disabled 00:06:19.523 EAL: Heap on socket 0 was expanded by 1026MB 00:06:19.782 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.042 passed 00:06:20.042 00:06:20.042 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.042 suites 1 1 n/a 0 0 00:06:20.042 tests 2 2 2 0 0 00:06:20.042 asserts 5505 5505 5505 0 n/a 00:06:20.042 00:06:20.042 Elapsed time = 1.257 seconds 00:06:20.043 EAL: request: mp_malloc_sync 00:06:20.043 EAL: No shared files mode enabled, IPC is disabled 00:06:20.043 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:20.043 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.043 EAL: request: mp_malloc_sync 00:06:20.043 EAL: No shared files mode enabled, IPC is disabled 00:06:20.043 EAL: Heap on socket 0 was shrunk by 2MB 00:06:20.043 EAL: No shared files mode enabled, IPC is disabled 00:06:20.043 EAL: No shared files mode enabled, IPC is disabled 00:06:20.043 EAL: No shared files mode enabled, IPC is disabled 00:06:20.043 ************************************ 00:06:20.043 END TEST env_vtophys 00:06:20.043 ************************************ 00:06:20.043 00:06:20.043 real 0m1.460s 00:06:20.043 user 0m0.795s 00:06:20.043 sys 0m0.529s 00:06:20.043 13:40:30 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.043 13:40:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:20.043 13:40:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:20.043 13:40:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.043 13:40:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.043 13:40:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.043 ************************************ 00:06:20.043 START TEST env_pci 00:06:20.043 ************************************ 00:06:20.043 13:40:30 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:20.043 00:06:20.043 00:06:20.043 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.043 http://cunit.sourceforge.net/ 00:06:20.043 00:06:20.043 00:06:20.043 Suite: pci 00:06:20.043 Test: pci_hook ...[2024-10-01 13:40:30.152328] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56764 has claimed it 00:06:20.043 passed 00:06:20.043 00:06:20.043 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.043 suites 1 1 n/a 0 0 00:06:20.043 tests 1 1 1 0 0 00:06:20.043 asserts 25 25 25 0 n/a 00:06:20.043 00:06:20.043 Elapsed time = 0.002 seconds 00:06:20.043 EAL: Cannot find device (10000:00:01.0) 00:06:20.043 EAL: Failed to attach device on primary process 00:06:20.043 ************************************ 00:06:20.043 END TEST env_pci 00:06:20.043 ************************************ 00:06:20.043 00:06:20.043 real 0m0.023s 00:06:20.043 user 0m0.010s 00:06:20.043 sys 0m0.012s 00:06:20.043 13:40:30 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.043 13:40:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:20.043 13:40:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:20.043 13:40:30 env -- env/env.sh@15 -- # uname 00:06:20.043 13:40:30 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:20.043 13:40:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:20.043 13:40:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:20.043 13:40:30 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:20.043 13:40:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.043 13:40:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.043 ************************************ 00:06:20.043 START TEST env_dpdk_post_init 00:06:20.043 ************************************ 00:06:20.043 13:40:30 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:20.301 EAL: Detected CPU lcores: 10 00:06:20.301 EAL: Detected NUMA nodes: 1 00:06:20.301 EAL: Detected shared linkage of DPDK 00:06:20.301 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:20.301 EAL: Selected IOVA mode 'PA' 00:06:20.301 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:20.301 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:20.301 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:20.301 Starting DPDK initialization... 00:06:20.301 Starting SPDK post initialization... 00:06:20.301 SPDK NVMe probe 00:06:20.301 Attaching to 0000:00:10.0 00:06:20.301 Attaching to 0000:00:11.0 00:06:20.301 Attached to 0000:00:10.0 00:06:20.301 Attached to 0000:00:11.0 00:06:20.301 Cleaning up... 00:06:20.301 ************************************ 00:06:20.301 END TEST env_dpdk_post_init 00:06:20.301 ************************************ 00:06:20.301 00:06:20.301 real 0m0.185s 00:06:20.301 user 0m0.050s 00:06:20.301 sys 0m0.035s 00:06:20.301 13:40:30 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.301 13:40:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.301 13:40:30 env -- env/env.sh@26 -- # uname 00:06:20.301 13:40:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:20.301 13:40:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:20.301 13:40:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.301 13:40:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.301 13:40:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.301 ************************************ 00:06:20.301 START TEST env_mem_callbacks 00:06:20.301 ************************************ 00:06:20.301 13:40:30 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:20.301 EAL: Detected CPU lcores: 10 00:06:20.301 EAL: Detected NUMA nodes: 1 00:06:20.301 EAL: Detected shared linkage of DPDK 00:06:20.301 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:20.559 EAL: Selected IOVA mode 'PA' 00:06:20.559 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:20.559 00:06:20.559 00:06:20.559 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.559 http://cunit.sourceforge.net/ 00:06:20.559 00:06:20.559 00:06:20.559 Suite: memory 00:06:20.559 Test: test ... 
00:06:20.559 register 0x200000200000 2097152 00:06:20.559 malloc 3145728 00:06:20.559 register 0x200000400000 4194304 00:06:20.559 buf 0x200000500000 len 3145728 PASSED 00:06:20.559 malloc 64 00:06:20.559 buf 0x2000004fff40 len 64 PASSED 00:06:20.559 malloc 4194304 00:06:20.559 register 0x200000800000 6291456 00:06:20.559 buf 0x200000a00000 len 4194304 PASSED 00:06:20.559 free 0x200000500000 3145728 00:06:20.559 free 0x2000004fff40 64 00:06:20.559 unregister 0x200000400000 4194304 PASSED 00:06:20.559 free 0x200000a00000 4194304 00:06:20.559 unregister 0x200000800000 6291456 PASSED 00:06:20.559 malloc 8388608 00:06:20.559 register 0x200000400000 10485760 00:06:20.559 buf 0x200000600000 len 8388608 PASSED 00:06:20.559 free 0x200000600000 8388608 00:06:20.559 unregister 0x200000400000 10485760 PASSED 00:06:20.559 passed 00:06:20.559 00:06:20.559 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.559 suites 1 1 n/a 0 0 00:06:20.559 tests 1 1 1 0 0 00:06:20.559 asserts 15 15 15 0 n/a 00:06:20.559 00:06:20.559 Elapsed time = 0.009 seconds 00:06:20.559 00:06:20.559 real 0m0.148s 00:06:20.559 user 0m0.017s 00:06:20.559 sys 0m0.028s 00:06:20.559 13:40:30 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.559 13:40:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:20.559 ************************************ 00:06:20.559 END TEST env_mem_callbacks 00:06:20.559 ************************************ 00:06:20.559 ************************************ 00:06:20.559 END TEST env 00:06:20.559 ************************************ 00:06:20.559 00:06:20.559 real 0m2.503s 00:06:20.559 user 0m1.288s 00:06:20.559 sys 0m0.860s 00:06:20.559 13:40:30 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.559 13:40:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.559 13:40:30 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:20.559 13:40:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.559 13:40:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.559 13:40:30 -- common/autotest_common.sh@10 -- # set +x 00:06:20.559 ************************************ 00:06:20.559 START TEST rpc 00:06:20.559 ************************************ 00:06:20.559 13:40:30 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:20.818 * Looking for test storage... 
00:06:20.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.818 13:40:30 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.818 13:40:30 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.818 13:40:30 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.818 13:40:30 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.818 13:40:30 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.818 13:40:30 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.818 13:40:30 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.818 13:40:30 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.818 13:40:30 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.818 13:40:30 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.818 13:40:30 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.818 13:40:30 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:20.818 13:40:30 rpc -- scripts/common.sh@345 -- # : 1 00:06:20.818 13:40:30 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.818 13:40:30 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.818 13:40:30 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:20.818 13:40:30 rpc -- scripts/common.sh@353 -- # local d=1 00:06:20.818 13:40:30 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.818 13:40:30 rpc -- scripts/common.sh@355 -- # echo 1 00:06:20.818 13:40:30 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.818 13:40:30 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:20.818 13:40:30 rpc -- scripts/common.sh@353 -- # local d=2 00:06:20.818 13:40:30 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.818 13:40:30 rpc -- scripts/common.sh@355 -- # echo 2 00:06:20.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:20.818 13:40:30 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.818 13:40:30 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.818 13:40:30 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.818 13:40:30 rpc -- scripts/common.sh@368 -- # return 0 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.818 --rc genhtml_branch_coverage=1 00:06:20.818 --rc genhtml_function_coverage=1 00:06:20.818 --rc genhtml_legend=1 00:06:20.818 --rc geninfo_all_blocks=1 00:06:20.818 --rc geninfo_unexecuted_blocks=1 00:06:20.818 00:06:20.818 ' 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.818 --rc genhtml_branch_coverage=1 00:06:20.818 --rc genhtml_function_coverage=1 00:06:20.818 --rc genhtml_legend=1 00:06:20.818 --rc geninfo_all_blocks=1 00:06:20.818 --rc geninfo_unexecuted_blocks=1 00:06:20.818 00:06:20.818 ' 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.818 --rc genhtml_branch_coverage=1 00:06:20.818 --rc genhtml_function_coverage=1 00:06:20.818 --rc genhtml_legend=1 00:06:20.818 --rc geninfo_all_blocks=1 00:06:20.818 --rc geninfo_unexecuted_blocks=1 00:06:20.818 00:06:20.818 ' 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.818 --rc genhtml_branch_coverage=1 00:06:20.818 --rc genhtml_function_coverage=1 00:06:20.818 --rc genhtml_legend=1 00:06:20.818 --rc geninfo_all_blocks=1 00:06:20.818 --rc geninfo_unexecuted_blocks=1 00:06:20.818 00:06:20.818 ' 00:06:20.818 13:40:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56887 00:06:20.818 13:40:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.818 13:40:30 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:20.818 13:40:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56887 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@831 -- # '[' -z 56887 ']' 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.818 13:40:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.818 [2024-10-01 13:40:30.941999] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:20.818 [2024-10-01 13:40:30.942341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56887 ] 00:06:21.077 [2024-10-01 13:40:31.083120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.077 [2024-10-01 13:40:31.216476] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:06:21.077 [2024-10-01 13:40:31.216789] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56887' to capture a snapshot of events at runtime. 00:06:21.077 [2024-10-01 13:40:31.216982] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:21.077 [2024-10-01 13:40:31.217135] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:21.077 [2024-10-01 13:40:31.217189] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56887 for offline analysis/debug. 00:06:21.077 [2024-10-01 13:40:31.217435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.390 [2024-10-01 13:40:31.298119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.964 13:40:31 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.964 13:40:31 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.964 13:40:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:21.964 13:40:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:21.964 13:40:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:21.964 13:40:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:21.964 13:40:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.964 13:40:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.964 13:40:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.964 ************************************ 00:06:21.964 START TEST rpc_integrity 00:06:21.964 ************************************ 00:06:21.964 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:21.964 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:21.964 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.964 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.964 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.964 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:21.964 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:21.964 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:21.964 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:21.964 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.964 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.964 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.964 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:21.964 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:21.964 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.964 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.964 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.964 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:21.964 { 
00:06:21.964 "name": "Malloc0", 00:06:21.964 "aliases": [ 00:06:21.964 "21dafcfe-ac29-467d-9d7d-f42dc9e0e624" 00:06:21.964 ], 00:06:21.964 "product_name": "Malloc disk", 00:06:21.964 "block_size": 512, 00:06:21.964 "num_blocks": 16384, 00:06:21.964 "uuid": "21dafcfe-ac29-467d-9d7d-f42dc9e0e624", 00:06:21.964 "assigned_rate_limits": { 00:06:21.964 "rw_ios_per_sec": 0, 00:06:21.964 "rw_mbytes_per_sec": 0, 00:06:21.964 "r_mbytes_per_sec": 0, 00:06:21.964 "w_mbytes_per_sec": 0 00:06:21.964 }, 00:06:21.964 "claimed": false, 00:06:21.964 "zoned": false, 00:06:21.964 "supported_io_types": { 00:06:21.964 "read": true, 00:06:21.964 "write": true, 00:06:21.964 "unmap": true, 00:06:21.964 "flush": true, 00:06:21.964 "reset": true, 00:06:21.964 "nvme_admin": false, 00:06:21.964 "nvme_io": false, 00:06:21.964 "nvme_io_md": false, 00:06:21.964 "write_zeroes": true, 00:06:21.964 "zcopy": true, 00:06:21.964 "get_zone_info": false, 00:06:21.964 "zone_management": false, 00:06:21.964 "zone_append": false, 00:06:21.964 "compare": false, 00:06:21.964 "compare_and_write": false, 00:06:21.964 "abort": true, 00:06:21.964 "seek_hole": false, 00:06:21.964 "seek_data": false, 00:06:21.964 "copy": true, 00:06:21.964 "nvme_iov_md": false 00:06:21.964 }, 00:06:21.964 "memory_domains": [ 00:06:21.964 { 00:06:21.964 "dma_device_id": "system", 00:06:21.964 "dma_device_type": 1 00:06:21.964 }, 00:06:21.964 { 00:06:21.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.964 "dma_device_type": 2 00:06:21.964 } 00:06:21.964 ], 00:06:21.964 "driver_specific": {} 00:06:21.964 } 00:06:21.964 ]' 00:06:21.964 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:22.223 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:22.223 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.223 [2024-10-01 13:40:32.150427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:22.223 [2024-10-01 13:40:32.150518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:22.223 [2024-10-01 13:40:32.150566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fac120 00:06:22.223 [2024-10-01 13:40:32.150587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:22.223 [2024-10-01 13:40:32.152622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:22.223 [2024-10-01 13:40:32.152666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:22.223 Passthru0 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.223 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.223 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:22.223 { 00:06:22.223 "name": "Malloc0", 00:06:22.223 "aliases": [ 00:06:22.223 "21dafcfe-ac29-467d-9d7d-f42dc9e0e624" 00:06:22.223 ], 00:06:22.223 "product_name": "Malloc disk", 00:06:22.223 "block_size": 512, 00:06:22.223 "num_blocks": 16384, 00:06:22.223 
"uuid": "21dafcfe-ac29-467d-9d7d-f42dc9e0e624", 00:06:22.223 "assigned_rate_limits": { 00:06:22.223 "rw_ios_per_sec": 0, 00:06:22.223 "rw_mbytes_per_sec": 0, 00:06:22.223 "r_mbytes_per_sec": 0, 00:06:22.223 "w_mbytes_per_sec": 0 00:06:22.223 }, 00:06:22.223 "claimed": true, 00:06:22.223 "claim_type": "exclusive_write", 00:06:22.223 "zoned": false, 00:06:22.223 "supported_io_types": { 00:06:22.223 "read": true, 00:06:22.223 "write": true, 00:06:22.223 "unmap": true, 00:06:22.223 "flush": true, 00:06:22.223 "reset": true, 00:06:22.223 "nvme_admin": false, 00:06:22.223 "nvme_io": false, 00:06:22.223 "nvme_io_md": false, 00:06:22.223 "write_zeroes": true, 00:06:22.223 "zcopy": true, 00:06:22.223 "get_zone_info": false, 00:06:22.223 "zone_management": false, 00:06:22.223 "zone_append": false, 00:06:22.223 "compare": false, 00:06:22.223 "compare_and_write": false, 00:06:22.223 "abort": true, 00:06:22.223 "seek_hole": false, 00:06:22.223 "seek_data": false, 00:06:22.223 "copy": true, 00:06:22.223 "nvme_iov_md": false 00:06:22.223 }, 00:06:22.223 "memory_domains": [ 00:06:22.223 { 00:06:22.223 "dma_device_id": "system", 00:06:22.223 "dma_device_type": 1 00:06:22.223 }, 00:06:22.223 { 00:06:22.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.223 "dma_device_type": 2 00:06:22.223 } 00:06:22.223 ], 00:06:22.223 "driver_specific": {} 00:06:22.223 }, 00:06:22.223 { 00:06:22.223 "name": "Passthru0", 00:06:22.223 "aliases": [ 00:06:22.223 "2749f3ab-d604-5848-8ba1-a66656a748f0" 00:06:22.223 ], 00:06:22.223 "product_name": "passthru", 00:06:22.223 "block_size": 512, 00:06:22.223 "num_blocks": 16384, 00:06:22.223 "uuid": "2749f3ab-d604-5848-8ba1-a66656a748f0", 00:06:22.223 "assigned_rate_limits": { 00:06:22.223 "rw_ios_per_sec": 0, 00:06:22.223 "rw_mbytes_per_sec": 0, 00:06:22.223 "r_mbytes_per_sec": 0, 00:06:22.223 "w_mbytes_per_sec": 0 00:06:22.223 }, 00:06:22.223 "claimed": false, 00:06:22.223 "zoned": false, 00:06:22.223 "supported_io_types": { 00:06:22.223 "read": true, 00:06:22.223 "write": true, 00:06:22.223 "unmap": true, 00:06:22.223 "flush": true, 00:06:22.223 "reset": true, 00:06:22.223 "nvme_admin": false, 00:06:22.223 "nvme_io": false, 00:06:22.223 "nvme_io_md": false, 00:06:22.223 "write_zeroes": true, 00:06:22.223 "zcopy": true, 00:06:22.223 "get_zone_info": false, 00:06:22.223 "zone_management": false, 00:06:22.223 "zone_append": false, 00:06:22.223 "compare": false, 00:06:22.223 "compare_and_write": false, 00:06:22.223 "abort": true, 00:06:22.223 "seek_hole": false, 00:06:22.223 "seek_data": false, 00:06:22.223 "copy": true, 00:06:22.223 "nvme_iov_md": false 00:06:22.223 }, 00:06:22.223 "memory_domains": [ 00:06:22.223 { 00:06:22.223 "dma_device_id": "system", 00:06:22.223 "dma_device_type": 1 00:06:22.223 }, 00:06:22.223 { 00:06:22.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.223 "dma_device_type": 2 00:06:22.223 } 00:06:22.223 ], 00:06:22.223 "driver_specific": { 00:06:22.223 "passthru": { 00:06:22.223 "name": "Passthru0", 00:06:22.223 "base_bdev_name": "Malloc0" 00:06:22.223 } 00:06:22.223 } 00:06:22.223 } 00:06:22.223 ]' 00:06:22.223 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:22.223 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:22.223 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.223 13:40:32 
rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.223 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.223 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.223 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:22.223 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:22.223 ************************************ 00:06:22.223 END TEST rpc_integrity 00:06:22.223 ************************************ 00:06:22.223 13:40:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:22.223 00:06:22.223 real 0m0.310s 00:06:22.223 user 0m0.200s 00:06:22.223 sys 0m0.040s 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.223 13:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.223 13:40:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:22.223 13:40:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.223 13:40:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.223 13:40:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.223 ************************************ 00:06:22.223 START TEST rpc_plugins 00:06:22.223 ************************************ 00:06:22.223 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:22.223 13:40:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:22.223 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.223 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.223 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.223 13:40:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:22.223 13:40:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:22.223 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.223 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.223 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.223 13:40:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:22.223 { 00:06:22.223 "name": "Malloc1", 00:06:22.223 "aliases": [ 00:06:22.223 "389440e4-f2b4-4f75-8a5f-5b1a980299e4" 00:06:22.223 ], 00:06:22.223 "product_name": "Malloc disk", 00:06:22.223 "block_size": 4096, 00:06:22.223 "num_blocks": 256, 00:06:22.223 "uuid": "389440e4-f2b4-4f75-8a5f-5b1a980299e4", 00:06:22.223 "assigned_rate_limits": { 00:06:22.223 "rw_ios_per_sec": 0, 00:06:22.223 "rw_mbytes_per_sec": 0, 00:06:22.223 "r_mbytes_per_sec": 0, 00:06:22.223 "w_mbytes_per_sec": 0 00:06:22.223 }, 00:06:22.223 "claimed": false, 00:06:22.223 "zoned": false, 00:06:22.224 "supported_io_types": { 00:06:22.224 "read": true, 00:06:22.224 "write": true, 00:06:22.224 "unmap": true, 00:06:22.224 "flush": true, 00:06:22.224 "reset": true, 
00:06:22.224 "nvme_admin": false, 00:06:22.224 "nvme_io": false, 00:06:22.224 "nvme_io_md": false, 00:06:22.224 "write_zeroes": true, 00:06:22.224 "zcopy": true, 00:06:22.224 "get_zone_info": false, 00:06:22.224 "zone_management": false, 00:06:22.224 "zone_append": false, 00:06:22.224 "compare": false, 00:06:22.224 "compare_and_write": false, 00:06:22.224 "abort": true, 00:06:22.224 "seek_hole": false, 00:06:22.224 "seek_data": false, 00:06:22.224 "copy": true, 00:06:22.224 "nvme_iov_md": false 00:06:22.224 }, 00:06:22.224 "memory_domains": [ 00:06:22.224 { 00:06:22.224 "dma_device_id": "system", 00:06:22.224 "dma_device_type": 1 00:06:22.224 }, 00:06:22.224 { 00:06:22.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.224 "dma_device_type": 2 00:06:22.224 } 00:06:22.224 ], 00:06:22.224 "driver_specific": {} 00:06:22.224 } 00:06:22.224 ]' 00:06:22.224 13:40:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:22.482 13:40:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:22.482 13:40:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:22.482 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.482 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.482 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.482 13:40:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:22.482 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.482 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.482 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.482 13:40:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:22.482 13:40:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:22.482 ************************************ 00:06:22.482 END TEST rpc_plugins 00:06:22.482 ************************************ 00:06:22.482 13:40:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:22.482 00:06:22.482 real 0m0.161s 00:06:22.482 user 0m0.115s 00:06:22.482 sys 0m0.013s 00:06:22.482 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.482 13:40:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.482 13:40:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:22.482 13:40:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.482 13:40:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.482 13:40:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.482 ************************************ 00:06:22.482 START TEST rpc_trace_cmd_test 00:06:22.482 ************************************ 00:06:22.482 13:40:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:22.482 13:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:22.482 13:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:22.482 13:40:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.482 13:40:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.482 13:40:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.482 13:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:22.482 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56887", 00:06:22.482 "tpoint_group_mask": "0x8", 00:06:22.482 
"iscsi_conn": { 00:06:22.482 "mask": "0x2", 00:06:22.482 "tpoint_mask": "0x0" 00:06:22.482 }, 00:06:22.482 "scsi": { 00:06:22.482 "mask": "0x4", 00:06:22.482 "tpoint_mask": "0x0" 00:06:22.482 }, 00:06:22.482 "bdev": { 00:06:22.482 "mask": "0x8", 00:06:22.482 "tpoint_mask": "0xffffffffffffffff" 00:06:22.482 }, 00:06:22.482 "nvmf_rdma": { 00:06:22.482 "mask": "0x10", 00:06:22.482 "tpoint_mask": "0x0" 00:06:22.482 }, 00:06:22.482 "nvmf_tcp": { 00:06:22.482 "mask": "0x20", 00:06:22.482 "tpoint_mask": "0x0" 00:06:22.482 }, 00:06:22.482 "ftl": { 00:06:22.482 "mask": "0x40", 00:06:22.482 "tpoint_mask": "0x0" 00:06:22.482 }, 00:06:22.482 "blobfs": { 00:06:22.482 "mask": "0x80", 00:06:22.482 "tpoint_mask": "0x0" 00:06:22.482 }, 00:06:22.482 "dsa": { 00:06:22.482 "mask": "0x200", 00:06:22.482 "tpoint_mask": "0x0" 00:06:22.482 }, 00:06:22.482 "thread": { 00:06:22.482 "mask": "0x400", 00:06:22.482 "tpoint_mask": "0x0" 00:06:22.482 }, 00:06:22.483 "nvme_pcie": { 00:06:22.483 "mask": "0x800", 00:06:22.483 "tpoint_mask": "0x0" 00:06:22.483 }, 00:06:22.483 "iaa": { 00:06:22.483 "mask": "0x1000", 00:06:22.483 "tpoint_mask": "0x0" 00:06:22.483 }, 00:06:22.483 "nvme_tcp": { 00:06:22.483 "mask": "0x2000", 00:06:22.483 "tpoint_mask": "0x0" 00:06:22.483 }, 00:06:22.483 "bdev_nvme": { 00:06:22.483 "mask": "0x4000", 00:06:22.483 "tpoint_mask": "0x0" 00:06:22.483 }, 00:06:22.483 "sock": { 00:06:22.483 "mask": "0x8000", 00:06:22.483 "tpoint_mask": "0x0" 00:06:22.483 }, 00:06:22.483 "blob": { 00:06:22.483 "mask": "0x10000", 00:06:22.483 "tpoint_mask": "0x0" 00:06:22.483 }, 00:06:22.483 "bdev_raid": { 00:06:22.483 "mask": "0x20000", 00:06:22.483 "tpoint_mask": "0x0" 00:06:22.483 } 00:06:22.483 }' 00:06:22.483 13:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:22.483 13:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:22.483 13:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:22.740 13:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:22.740 13:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:22.740 13:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:22.740 13:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:22.740 13:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:22.740 13:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:22.740 ************************************ 00:06:22.740 END TEST rpc_trace_cmd_test 00:06:22.740 ************************************ 00:06:22.740 13:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:22.740 00:06:22.740 real 0m0.266s 00:06:22.740 user 0m0.228s 00:06:22.740 sys 0m0.028s 00:06:22.740 13:40:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.740 13:40:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.740 13:40:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:22.740 13:40:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:22.740 13:40:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:22.740 13:40:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.740 13:40:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.740 13:40:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.740 ************************************ 00:06:22.740 START TEST rpc_daemon_integrity 
00:06:22.740 ************************************ 00:06:22.740 13:40:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:22.740 13:40:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:22.740 13:40:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.740 13:40:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.740 13:40:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.740 13:40:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:22.740 13:40:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:22.997 13:40:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:22.997 13:40:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:22.997 13:40:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.997 13:40:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.997 13:40:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.997 13:40:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:22.997 13:40:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:22.997 13:40:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.997 13:40:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.997 13:40:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.997 13:40:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:22.997 { 00:06:22.997 "name": "Malloc2", 00:06:22.997 "aliases": [ 00:06:22.997 "97ab2ff6-8a85-4e1f-a7c2-9898025110d3" 00:06:22.997 ], 00:06:22.997 "product_name": "Malloc disk", 00:06:22.997 "block_size": 512, 00:06:22.997 "num_blocks": 16384, 00:06:22.997 "uuid": "97ab2ff6-8a85-4e1f-a7c2-9898025110d3", 00:06:22.997 "assigned_rate_limits": { 00:06:22.997 "rw_ios_per_sec": 0, 00:06:22.997 "rw_mbytes_per_sec": 0, 00:06:22.997 "r_mbytes_per_sec": 0, 00:06:22.997 "w_mbytes_per_sec": 0 00:06:22.997 }, 00:06:22.997 "claimed": false, 00:06:22.997 "zoned": false, 00:06:22.997 "supported_io_types": { 00:06:22.997 "read": true, 00:06:22.998 "write": true, 00:06:22.998 "unmap": true, 00:06:22.998 "flush": true, 00:06:22.998 "reset": true, 00:06:22.998 "nvme_admin": false, 00:06:22.998 "nvme_io": false, 00:06:22.998 "nvme_io_md": false, 00:06:22.998 "write_zeroes": true, 00:06:22.998 "zcopy": true, 00:06:22.998 "get_zone_info": false, 00:06:22.998 "zone_management": false, 00:06:22.998 "zone_append": false, 00:06:22.998 "compare": false, 00:06:22.998 "compare_and_write": false, 00:06:22.998 "abort": true, 00:06:22.998 "seek_hole": false, 00:06:22.998 "seek_data": false, 00:06:22.998 "copy": true, 00:06:22.998 "nvme_iov_md": false 00:06:22.998 }, 00:06:22.998 "memory_domains": [ 00:06:22.998 { 00:06:22.998 "dma_device_id": "system", 00:06:22.998 "dma_device_type": 1 00:06:22.998 }, 00:06:22.998 { 00:06:22.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.998 "dma_device_type": 2 00:06:22.998 } 00:06:22.998 ], 00:06:22.998 "driver_specific": {} 00:06:22.998 } 00:06:22.998 ]' 00:06:22.998 13:40:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.998 [2024-10-01 13:40:33.019094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:22.998 [2024-10-01 13:40:33.019155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:22.998 [2024-10-01 13:40:33.019176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fb9a90 00:06:22.998 [2024-10-01 13:40:33.019186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:22.998 [2024-10-01 13:40:33.021283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:22.998 [2024-10-01 13:40:33.021320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:22.998 Passthru0 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:22.998 { 00:06:22.998 "name": "Malloc2", 00:06:22.998 "aliases": [ 00:06:22.998 "97ab2ff6-8a85-4e1f-a7c2-9898025110d3" 00:06:22.998 ], 00:06:22.998 "product_name": "Malloc disk", 00:06:22.998 "block_size": 512, 00:06:22.998 "num_blocks": 16384, 00:06:22.998 "uuid": "97ab2ff6-8a85-4e1f-a7c2-9898025110d3", 00:06:22.998 "assigned_rate_limits": { 00:06:22.998 "rw_ios_per_sec": 0, 00:06:22.998 "rw_mbytes_per_sec": 0, 00:06:22.998 "r_mbytes_per_sec": 0, 00:06:22.998 "w_mbytes_per_sec": 0 00:06:22.998 }, 00:06:22.998 "claimed": true, 00:06:22.998 "claim_type": "exclusive_write", 00:06:22.998 "zoned": false, 00:06:22.998 "supported_io_types": { 00:06:22.998 "read": true, 00:06:22.998 "write": true, 00:06:22.998 "unmap": true, 00:06:22.998 "flush": true, 00:06:22.998 "reset": true, 00:06:22.998 "nvme_admin": false, 00:06:22.998 "nvme_io": false, 00:06:22.998 "nvme_io_md": false, 00:06:22.998 "write_zeroes": true, 00:06:22.998 "zcopy": true, 00:06:22.998 "get_zone_info": false, 00:06:22.998 "zone_management": false, 00:06:22.998 "zone_append": false, 00:06:22.998 "compare": false, 00:06:22.998 "compare_and_write": false, 00:06:22.998 "abort": true, 00:06:22.998 "seek_hole": false, 00:06:22.998 "seek_data": false, 00:06:22.998 "copy": true, 00:06:22.998 "nvme_iov_md": false 00:06:22.998 }, 00:06:22.998 "memory_domains": [ 00:06:22.998 { 00:06:22.998 "dma_device_id": "system", 00:06:22.998 "dma_device_type": 1 00:06:22.998 }, 00:06:22.998 { 00:06:22.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.998 "dma_device_type": 2 00:06:22.998 } 00:06:22.998 ], 00:06:22.998 "driver_specific": {} 00:06:22.998 }, 00:06:22.998 { 00:06:22.998 "name": "Passthru0", 00:06:22.998 "aliases": [ 00:06:22.998 "dbf24aa3-be93-5f2b-9327-ee74654f9908" 00:06:22.998 ], 00:06:22.998 "product_name": "passthru", 00:06:22.998 "block_size": 512, 00:06:22.998 "num_blocks": 16384, 00:06:22.998 "uuid": "dbf24aa3-be93-5f2b-9327-ee74654f9908", 00:06:22.998 "assigned_rate_limits": { 00:06:22.998 "rw_ios_per_sec": 0, 
00:06:22.998 "rw_mbytes_per_sec": 0, 00:06:22.998 "r_mbytes_per_sec": 0, 00:06:22.998 "w_mbytes_per_sec": 0 00:06:22.998 }, 00:06:22.998 "claimed": false, 00:06:22.998 "zoned": false, 00:06:22.998 "supported_io_types": { 00:06:22.998 "read": true, 00:06:22.998 "write": true, 00:06:22.998 "unmap": true, 00:06:22.998 "flush": true, 00:06:22.998 "reset": true, 00:06:22.998 "nvme_admin": false, 00:06:22.998 "nvme_io": false, 00:06:22.998 "nvme_io_md": false, 00:06:22.998 "write_zeroes": true, 00:06:22.998 "zcopy": true, 00:06:22.998 "get_zone_info": false, 00:06:22.998 "zone_management": false, 00:06:22.998 "zone_append": false, 00:06:22.998 "compare": false, 00:06:22.998 "compare_and_write": false, 00:06:22.998 "abort": true, 00:06:22.998 "seek_hole": false, 00:06:22.998 "seek_data": false, 00:06:22.998 "copy": true, 00:06:22.998 "nvme_iov_md": false 00:06:22.998 }, 00:06:22.998 "memory_domains": [ 00:06:22.998 { 00:06:22.998 "dma_device_id": "system", 00:06:22.998 "dma_device_type": 1 00:06:22.998 }, 00:06:22.998 { 00:06:22.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.998 "dma_device_type": 2 00:06:22.998 } 00:06:22.998 ], 00:06:22.998 "driver_specific": { 00:06:22.998 "passthru": { 00:06:22.998 "name": "Passthru0", 00:06:22.998 "base_bdev_name": "Malloc2" 00:06:22.998 } 00:06:22.998 } 00:06:22.998 } 00:06:22.998 ]' 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:22.998 13:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:23.256 ************************************ 00:06:23.256 END TEST rpc_daemon_integrity 00:06:23.256 ************************************ 00:06:23.256 13:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:23.256 00:06:23.256 real 0m0.302s 00:06:23.256 user 0m0.209s 00:06:23.256 sys 0m0.032s 00:06:23.256 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.256 13:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.256 13:40:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:23.256 13:40:33 rpc -- rpc/rpc.sh@84 -- # killprocess 56887 00:06:23.256 13:40:33 rpc -- common/autotest_common.sh@950 -- 
# '[' -z 56887 ']' 00:06:23.256 13:40:33 rpc -- common/autotest_common.sh@954 -- # kill -0 56887 00:06:23.256 13:40:33 rpc -- common/autotest_common.sh@955 -- # uname 00:06:23.256 13:40:33 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.256 13:40:33 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56887 00:06:23.256 killing process with pid 56887 00:06:23.256 13:40:33 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.256 13:40:33 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.256 13:40:33 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56887' 00:06:23.256 13:40:33 rpc -- common/autotest_common.sh@969 -- # kill 56887 00:06:23.256 13:40:33 rpc -- common/autotest_common.sh@974 -- # wait 56887 00:06:23.513 00:06:23.513 real 0m2.980s 00:06:23.513 user 0m3.839s 00:06:23.513 sys 0m0.706s 00:06:23.513 13:40:33 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.513 13:40:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.513 ************************************ 00:06:23.513 END TEST rpc 00:06:23.513 ************************************ 00:06:23.771 13:40:33 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:23.771 13:40:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.771 13:40:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.771 13:40:33 -- common/autotest_common.sh@10 -- # set +x 00:06:23.771 ************************************ 00:06:23.771 START TEST skip_rpc 00:06:23.771 ************************************ 00:06:23.771 13:40:33 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:23.771 * Looking for test storage... 00:06:23.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:23.771 13:40:33 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:23.771 13:40:33 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:23.771 13:40:33 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:23.771 13:40:33 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.771 13:40:33 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:23.771 13:40:33 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.772 13:40:33 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:23.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.772 --rc genhtml_branch_coverage=1 00:06:23.772 --rc genhtml_function_coverage=1 00:06:23.772 --rc genhtml_legend=1 00:06:23.772 --rc geninfo_all_blocks=1 00:06:23.772 --rc geninfo_unexecuted_blocks=1 00:06:23.772 00:06:23.772 ' 00:06:23.772 13:40:33 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:23.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.772 --rc genhtml_branch_coverage=1 00:06:23.772 --rc genhtml_function_coverage=1 00:06:23.772 --rc genhtml_legend=1 00:06:23.772 --rc geninfo_all_blocks=1 00:06:23.772 --rc geninfo_unexecuted_blocks=1 00:06:23.772 00:06:23.772 ' 00:06:23.772 13:40:33 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:23.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.772 --rc genhtml_branch_coverage=1 00:06:23.772 --rc genhtml_function_coverage=1 00:06:23.772 --rc genhtml_legend=1 00:06:23.772 --rc geninfo_all_blocks=1 00:06:23.772 --rc geninfo_unexecuted_blocks=1 00:06:23.772 00:06:23.772 ' 00:06:23.772 13:40:33 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:23.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.772 --rc genhtml_branch_coverage=1 00:06:23.772 --rc genhtml_function_coverage=1 00:06:23.772 --rc genhtml_legend=1 00:06:23.772 --rc geninfo_all_blocks=1 00:06:23.772 --rc geninfo_unexecuted_blocks=1 00:06:23.772 00:06:23.772 ' 00:06:23.772 13:40:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:23.772 13:40:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:23.772 13:40:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:23.772 13:40:33 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.772 13:40:33 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.772 13:40:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.772 ************************************ 00:06:23.772 START TEST skip_rpc 00:06:23.772 ************************************ 00:06:23.772 13:40:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:23.772 13:40:33 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57092 00:06:23.772 13:40:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:23.772 13:40:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.772 13:40:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:24.030 [2024-10-01 13:40:33.979243] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:24.030 [2024-10-01 13:40:33.979805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57092 ] 00:06:24.030 [2024-10-01 13:40:34.117639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.287 [2024-10-01 13:40:34.236160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.288 [2024-10-01 13:40:34.309170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57092 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57092 ']' 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57092 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57092 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.570 killing process with pid 57092 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 57092' 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57092 00:06:29.570 13:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57092 00:06:29.570 ************************************ 00:06:29.570 END TEST skip_rpc 00:06:29.570 ************************************ 00:06:29.570 00:06:29.570 real 0m5.460s 00:06:29.570 user 0m5.055s 00:06:29.570 sys 0m0.304s 00:06:29.570 13:40:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.570 13:40:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.570 13:40:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:29.570 13:40:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.570 13:40:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.570 13:40:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.570 ************************************ 00:06:29.570 START TEST skip_rpc_with_json 00:06:29.570 ************************************ 00:06:29.570 13:40:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:29.570 13:40:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:29.570 13:40:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57174 00:06:29.570 13:40:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.570 13:40:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57174 00:06:29.570 13:40:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.570 13:40:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57174 ']' 00:06:29.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.570 13:40:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.570 13:40:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.570 13:40:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.570 13:40:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.570 13:40:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.570 [2024-10-01 13:40:39.477086] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:06:29.570 [2024-10-01 13:40:39.477189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57174 ] 00:06:29.570 [2024-10-01 13:40:39.614630] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.570 [2024-10-01 13:40:39.746686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.828 [2024-10-01 13:40:39.820989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.759 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.759 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:30.759 13:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:30.759 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.759 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.759 [2024-10-01 13:40:40.585501] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:30.759 request: 00:06:30.759 { 00:06:30.759 "trtype": "tcp", 00:06:30.759 "method": "nvmf_get_transports", 00:06:30.759 "req_id": 1 00:06:30.759 } 00:06:30.759 Got JSON-RPC error response 00:06:30.759 response: 00:06:30.759 { 00:06:30.759 "code": -19, 00:06:30.759 "message": "No such device" 00:06:30.759 } 00:06:30.759 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:30.759 13:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:30.759 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.759 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.759 [2024-10-01 13:40:40.597617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:30.760 { 00:06:30.760 "subsystems": [ 00:06:30.760 { 00:06:30.760 "subsystem": "fsdev", 00:06:30.760 "config": [ 00:06:30.760 { 00:06:30.760 "method": "fsdev_set_opts", 00:06:30.760 "params": { 00:06:30.760 "fsdev_io_pool_size": 65535, 00:06:30.760 "fsdev_io_cache_size": 256 00:06:30.760 } 00:06:30.760 } 00:06:30.760 ] 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "subsystem": "keyring", 00:06:30.760 "config": [] 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "subsystem": "iobuf", 00:06:30.760 "config": [ 00:06:30.760 { 00:06:30.760 "method": "iobuf_set_options", 00:06:30.760 "params": { 00:06:30.760 "small_pool_count": 8192, 00:06:30.760 "large_pool_count": 1024, 00:06:30.760 "small_bufsize": 8192, 00:06:30.760 "large_bufsize": 135168 00:06:30.760 } 00:06:30.760 } 00:06:30.760 ] 00:06:30.760 
}, 00:06:30.760 { 00:06:30.760 "subsystem": "sock", 00:06:30.760 "config": [ 00:06:30.760 { 00:06:30.760 "method": "sock_set_default_impl", 00:06:30.760 "params": { 00:06:30.760 "impl_name": "uring" 00:06:30.760 } 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "method": "sock_impl_set_options", 00:06:30.760 "params": { 00:06:30.760 "impl_name": "ssl", 00:06:30.760 "recv_buf_size": 4096, 00:06:30.760 "send_buf_size": 4096, 00:06:30.760 "enable_recv_pipe": true, 00:06:30.760 "enable_quickack": false, 00:06:30.760 "enable_placement_id": 0, 00:06:30.760 "enable_zerocopy_send_server": true, 00:06:30.760 "enable_zerocopy_send_client": false, 00:06:30.760 "zerocopy_threshold": 0, 00:06:30.760 "tls_version": 0, 00:06:30.760 "enable_ktls": false 00:06:30.760 } 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "method": "sock_impl_set_options", 00:06:30.760 "params": { 00:06:30.760 "impl_name": "posix", 00:06:30.760 "recv_buf_size": 2097152, 00:06:30.760 "send_buf_size": 2097152, 00:06:30.760 "enable_recv_pipe": true, 00:06:30.760 "enable_quickack": false, 00:06:30.760 "enable_placement_id": 0, 00:06:30.760 "enable_zerocopy_send_server": true, 00:06:30.760 "enable_zerocopy_send_client": false, 00:06:30.760 "zerocopy_threshold": 0, 00:06:30.760 "tls_version": 0, 00:06:30.760 "enable_ktls": false 00:06:30.760 } 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "method": "sock_impl_set_options", 00:06:30.760 "params": { 00:06:30.760 "impl_name": "uring", 00:06:30.760 "recv_buf_size": 2097152, 00:06:30.760 "send_buf_size": 2097152, 00:06:30.760 "enable_recv_pipe": true, 00:06:30.760 "enable_quickack": false, 00:06:30.760 "enable_placement_id": 0, 00:06:30.760 "enable_zerocopy_send_server": false, 00:06:30.760 "enable_zerocopy_send_client": false, 00:06:30.760 "zerocopy_threshold": 0, 00:06:30.760 "tls_version": 0, 00:06:30.760 "enable_ktls": false 00:06:30.760 } 00:06:30.760 } 00:06:30.760 ] 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "subsystem": "vmd", 00:06:30.760 "config": [] 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "subsystem": "accel", 00:06:30.760 "config": [ 00:06:30.760 { 00:06:30.760 "method": "accel_set_options", 00:06:30.760 "params": { 00:06:30.760 "small_cache_size": 128, 00:06:30.760 "large_cache_size": 16, 00:06:30.760 "task_count": 2048, 00:06:30.760 "sequence_count": 2048, 00:06:30.760 "buf_count": 2048 00:06:30.760 } 00:06:30.760 } 00:06:30.760 ] 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "subsystem": "bdev", 00:06:30.760 "config": [ 00:06:30.760 { 00:06:30.760 "method": "bdev_set_options", 00:06:30.760 "params": { 00:06:30.760 "bdev_io_pool_size": 65535, 00:06:30.760 "bdev_io_cache_size": 256, 00:06:30.760 "bdev_auto_examine": true, 00:06:30.760 "iobuf_small_cache_size": 128, 00:06:30.760 "iobuf_large_cache_size": 16 00:06:30.760 } 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "method": "bdev_raid_set_options", 00:06:30.760 "params": { 00:06:30.760 "process_window_size_kb": 1024, 00:06:30.760 "process_max_bandwidth_mb_sec": 0 00:06:30.760 } 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "method": "bdev_iscsi_set_options", 00:06:30.760 "params": { 00:06:30.760 "timeout_sec": 30 00:06:30.760 } 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "method": "bdev_nvme_set_options", 00:06:30.760 "params": { 00:06:30.760 "action_on_timeout": "none", 00:06:30.760 "timeout_us": 0, 00:06:30.760 "timeout_admin_us": 0, 00:06:30.760 "keep_alive_timeout_ms": 10000, 00:06:30.760 "arbitration_burst": 0, 00:06:30.760 "low_priority_weight": 0, 00:06:30.760 "medium_priority_weight": 0, 00:06:30.760 "high_priority_weight": 0, 
00:06:30.760 "nvme_adminq_poll_period_us": 10000, 00:06:30.760 "nvme_ioq_poll_period_us": 0, 00:06:30.760 "io_queue_requests": 0, 00:06:30.760 "delay_cmd_submit": true, 00:06:30.760 "transport_retry_count": 4, 00:06:30.760 "bdev_retry_count": 3, 00:06:30.760 "transport_ack_timeout": 0, 00:06:30.760 "ctrlr_loss_timeout_sec": 0, 00:06:30.760 "reconnect_delay_sec": 0, 00:06:30.760 "fast_io_fail_timeout_sec": 0, 00:06:30.760 "disable_auto_failback": false, 00:06:30.760 "generate_uuids": false, 00:06:30.760 "transport_tos": 0, 00:06:30.760 "nvme_error_stat": false, 00:06:30.760 "rdma_srq_size": 0, 00:06:30.760 "io_path_stat": false, 00:06:30.760 "allow_accel_sequence": false, 00:06:30.760 "rdma_max_cq_size": 0, 00:06:30.760 "rdma_cm_event_timeout_ms": 0, 00:06:30.760 "dhchap_digests": [ 00:06:30.760 "sha256", 00:06:30.760 "sha384", 00:06:30.760 "sha512" 00:06:30.760 ], 00:06:30.760 "dhchap_dhgroups": [ 00:06:30.760 "null", 00:06:30.760 "ffdhe2048", 00:06:30.760 "ffdhe3072", 00:06:30.760 "ffdhe4096", 00:06:30.760 "ffdhe6144", 00:06:30.760 "ffdhe8192" 00:06:30.760 ] 00:06:30.760 } 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "method": "bdev_nvme_set_hotplug", 00:06:30.760 "params": { 00:06:30.760 "period_us": 100000, 00:06:30.760 "enable": false 00:06:30.760 } 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "method": "bdev_wait_for_examine" 00:06:30.760 } 00:06:30.760 ] 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "subsystem": "scsi", 00:06:30.760 "config": null 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "subsystem": "scheduler", 00:06:30.760 "config": [ 00:06:30.760 { 00:06:30.760 "method": "framework_set_scheduler", 00:06:30.760 "params": { 00:06:30.760 "name": "static" 00:06:30.760 } 00:06:30.760 } 00:06:30.760 ] 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "subsystem": "vhost_scsi", 00:06:30.760 "config": [] 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "subsystem": "vhost_blk", 00:06:30.760 "config": [] 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "subsystem": "ublk", 00:06:30.760 "config": [] 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "subsystem": "nbd", 00:06:30.760 "config": [] 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "subsystem": "nvmf", 00:06:30.760 "config": [ 00:06:30.760 { 00:06:30.760 "method": "nvmf_set_config", 00:06:30.760 "params": { 00:06:30.760 "discovery_filter": "match_any", 00:06:30.760 "admin_cmd_passthru": { 00:06:30.760 "identify_ctrlr": false 00:06:30.760 }, 00:06:30.760 "dhchap_digests": [ 00:06:30.760 "sha256", 00:06:30.760 "sha384", 00:06:30.760 "sha512" 00:06:30.760 ], 00:06:30.760 "dhchap_dhgroups": [ 00:06:30.760 "null", 00:06:30.760 "ffdhe2048", 00:06:30.760 "ffdhe3072", 00:06:30.760 "ffdhe4096", 00:06:30.760 "ffdhe6144", 00:06:30.760 "ffdhe8192" 00:06:30.760 ] 00:06:30.760 } 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "method": "nvmf_set_max_subsystems", 00:06:30.760 "params": { 00:06:30.760 "max_subsystems": 1024 00:06:30.760 } 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "method": "nvmf_set_crdt", 00:06:30.760 "params": { 00:06:30.760 "crdt1": 0, 00:06:30.760 "crdt2": 0, 00:06:30.760 "crdt3": 0 00:06:30.760 } 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "method": "nvmf_create_transport", 00:06:30.760 "params": { 00:06:30.760 "trtype": "TCP", 00:06:30.760 "max_queue_depth": 128, 00:06:30.760 "max_io_qpairs_per_ctrlr": 127, 00:06:30.760 "in_capsule_data_size": 4096, 00:06:30.760 "max_io_size": 131072, 00:06:30.760 "io_unit_size": 131072, 00:06:30.760 "max_aq_depth": 128, 00:06:30.760 "num_shared_buffers": 511, 00:06:30.760 "buf_cache_size": 4294967295, 00:06:30.760 
"dif_insert_or_strip": false, 00:06:30.760 "zcopy": false, 00:06:30.760 "c2h_success": true, 00:06:30.760 "sock_priority": 0, 00:06:30.760 "abort_timeout_sec": 1, 00:06:30.760 "ack_timeout": 0, 00:06:30.760 "data_wr_pool_size": 0 00:06:30.760 } 00:06:30.760 } 00:06:30.760 ] 00:06:30.760 }, 00:06:30.760 { 00:06:30.760 "subsystem": "iscsi", 00:06:30.760 "config": [ 00:06:30.760 { 00:06:30.760 "method": "iscsi_set_options", 00:06:30.760 "params": { 00:06:30.760 "node_base": "iqn.2016-06.io.spdk", 00:06:30.760 "max_sessions": 128, 00:06:30.760 "max_connections_per_session": 2, 00:06:30.760 "max_queue_depth": 64, 00:06:30.760 "default_time2wait": 2, 00:06:30.760 "default_time2retain": 20, 00:06:30.760 "first_burst_length": 8192, 00:06:30.760 "immediate_data": true, 00:06:30.760 "allow_duplicated_isid": false, 00:06:30.760 "error_recovery_level": 0, 00:06:30.760 "nop_timeout": 60, 00:06:30.760 "nop_in_interval": 30, 00:06:30.760 "disable_chap": false, 00:06:30.760 "require_chap": false, 00:06:30.760 "mutual_chap": false, 00:06:30.760 "chap_group": 0, 00:06:30.760 "max_large_datain_per_connection": 64, 00:06:30.760 "max_r2t_per_connection": 4, 00:06:30.760 "pdu_pool_size": 36864, 00:06:30.760 "immediate_data_pool_size": 16384, 00:06:30.760 "data_out_pool_size": 2048 00:06:30.760 } 00:06:30.760 } 00:06:30.760 ] 00:06:30.760 } 00:06:30.760 ] 00:06:30.760 } 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57174 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57174 ']' 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57174 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57174 00:06:30.760 killing process with pid 57174 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57174' 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57174 00:06:30.760 13:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57174 00:06:31.325 13:40:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57207 00:06:31.325 13:40:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:31.325 13:40:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57207 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57207 ']' 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57207 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57207 00:06:36.637 killing process with pid 57207 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57207' 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57207 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57207 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:36.637 00:06:36.637 real 0m7.256s 00:06:36.637 user 0m7.085s 00:06:36.637 sys 0m0.678s 00:06:36.637 ************************************ 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.637 END TEST skip_rpc_with_json 00:06:36.637 ************************************ 00:06:36.637 13:40:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:36.637 13:40:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.637 13:40:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.637 13:40:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.637 ************************************ 00:06:36.637 START TEST skip_rpc_with_delay 00:06:36.637 ************************************ 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:36.637 [2024-10-01 13:40:46.778193] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:36.637 [2024-10-01 13:40:46.778573] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.637 00:06:36.637 real 0m0.081s 00:06:36.637 user 0m0.048s 00:06:36.637 sys 0m0.031s 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.637 ************************************ 00:06:36.637 END TEST skip_rpc_with_delay 00:06:36.637 ************************************ 00:06:36.637 13:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:36.895 13:40:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:36.895 13:40:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:36.895 13:40:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:36.895 13:40:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.895 13:40:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.895 13:40:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.895 ************************************ 00:06:36.895 START TEST exit_on_failed_rpc_init 00:06:36.895 ************************************ 00:06:36.895 13:40:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:36.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.895 13:40:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57317 00:06:36.895 13:40:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57317 00:06:36.895 13:40:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.895 13:40:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57317 ']' 00:06:36.895 13:40:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.895 13:40:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.895 13:40:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.895 13:40:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.895 13:40:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:36.895 [2024-10-01 13:40:46.934577] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:06:36.895 [2024-10-01 13:40:46.934754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57317 ] 00:06:37.153 [2024-10-01 13:40:47.077530] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.153 [2024-10-01 13:40:47.199058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.153 [2024-10-01 13:40:47.269770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.411 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.411 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:37.411 13:40:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:37.411 13:40:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:37.411 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:37.411 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:37.411 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:37.411 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.411 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:37.412 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.412 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:37.412 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.412 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:37.412 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:37.412 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:37.412 [2024-10-01 13:40:47.529418] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:37.412 [2024-10-01 13:40:47.529515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57327 ] 00:06:37.670 [2024-10-01 13:40:47.666631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.670 [2024-10-01 13:40:47.804278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.670 [2024-10-01 13:40:47.804381] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:37.671 [2024-10-01 13:40:47.804399] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:37.671 [2024-10-01 13:40:47.804409] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57317 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57317 ']' 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57317 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57317 00:06:37.929 killing process with pid 57317 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57317' 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57317 00:06:37.929 13:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57317 00:06:38.494 00:06:38.494 real 0m1.539s 00:06:38.494 user 0m1.795s 00:06:38.494 sys 0m0.403s 00:06:38.494 13:40:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.494 ************************************ 00:06:38.494 END TEST exit_on_failed_rpc_init 00:06:38.494 ************************************ 00:06:38.494 13:40:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:38.494 13:40:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:38.494 00:06:38.494 real 0m14.711s 00:06:38.494 user 0m14.150s 00:06:38.494 sys 0m1.617s 00:06:38.494 13:40:48 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.494 13:40:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.494 ************************************ 00:06:38.494 END TEST skip_rpc 00:06:38.494 ************************************ 00:06:38.494 13:40:48 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:38.494 13:40:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.494 13:40:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.494 13:40:48 -- common/autotest_common.sh@10 -- # set +x 00:06:38.494 
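Note: the exit_on_failed_rpc_init test above relies on two spdk_tgt instances not being able to share one RPC Unix socket; the second instance fails with "RPC Unix domain socket path /var/tmp/spdk.sock in use" and stops with a non-zero status. A rough sketch of that collision, assuming a local build (illustrative only):
  ./build/bin/spdk_tgt -m 0x1 &    # first target claims /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x2      # second target cannot start its RPC server and exits non-zero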
************************************ 00:06:38.494 START TEST rpc_client 00:06:38.494 ************************************ 00:06:38.494 13:40:48 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:38.494 * Looking for test storage... 00:06:38.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:38.494 13:40:48 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.494 13:40:48 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.494 13:40:48 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.494 13:40:48 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:38.494 13:40:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.495 13:40:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.495 13:40:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.495 13:40:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:38.495 13:40:48 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.495 13:40:48 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.495 --rc genhtml_branch_coverage=1 00:06:38.495 --rc genhtml_function_coverage=1 00:06:38.495 --rc genhtml_legend=1 00:06:38.495 --rc geninfo_all_blocks=1 00:06:38.495 --rc geninfo_unexecuted_blocks=1 00:06:38.495 00:06:38.495 ' 00:06:38.495 13:40:48 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.495 --rc genhtml_branch_coverage=1 00:06:38.495 --rc genhtml_function_coverage=1 00:06:38.495 --rc genhtml_legend=1 00:06:38.495 --rc geninfo_all_blocks=1 00:06:38.495 --rc geninfo_unexecuted_blocks=1 00:06:38.495 00:06:38.495 ' 00:06:38.495 13:40:48 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.495 --rc genhtml_branch_coverage=1 00:06:38.495 --rc genhtml_function_coverage=1 00:06:38.495 --rc genhtml_legend=1 00:06:38.495 --rc geninfo_all_blocks=1 00:06:38.495 --rc geninfo_unexecuted_blocks=1 00:06:38.495 00:06:38.495 ' 00:06:38.495 13:40:48 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.495 --rc genhtml_branch_coverage=1 00:06:38.495 --rc genhtml_function_coverage=1 00:06:38.495 --rc genhtml_legend=1 00:06:38.495 --rc geninfo_all_blocks=1 00:06:38.495 --rc geninfo_unexecuted_blocks=1 00:06:38.495 00:06:38.495 ' 00:06:38.495 13:40:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:38.753 OK 00:06:38.753 13:40:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:38.753 00:06:38.753 real 0m0.212s 00:06:38.753 user 0m0.136s 00:06:38.753 sys 0m0.083s 00:06:38.753 13:40:48 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.753 13:40:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:38.753 ************************************ 00:06:38.753 END TEST rpc_client 00:06:38.753 ************************************ 00:06:38.753 13:40:48 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:38.753 13:40:48 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.753 13:40:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.753 13:40:48 -- common/autotest_common.sh@10 -- # set +x 00:06:38.753 ************************************ 00:06:38.753 START TEST json_config 00:06:38.753 ************************************ 00:06:38.753 13:40:48 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:38.753 13:40:48 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.753 13:40:48 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.753 13:40:48 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.753 13:40:48 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.753 13:40:48 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.753 13:40:48 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.753 13:40:48 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.753 13:40:48 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.753 13:40:48 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.753 13:40:48 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.753 13:40:48 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.753 13:40:48 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.753 13:40:48 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.753 13:40:48 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.753 13:40:48 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.753 13:40:48 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:38.753 13:40:48 json_config -- scripts/common.sh@345 -- # : 1 00:06:38.753 13:40:48 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.753 13:40:48 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.753 13:40:48 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:38.753 13:40:48 json_config -- scripts/common.sh@353 -- # local d=1 00:06:38.753 13:40:48 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.753 13:40:48 json_config -- scripts/common.sh@355 -- # echo 1 00:06:38.753 13:40:48 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.753 13:40:48 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:38.753 13:40:48 json_config -- scripts/common.sh@353 -- # local d=2 00:06:38.753 13:40:48 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.753 13:40:48 json_config -- scripts/common.sh@355 -- # echo 2 00:06:38.753 13:40:48 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.753 13:40:48 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.753 13:40:48 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.753 13:40:48 json_config -- scripts/common.sh@368 -- # return 0 00:06:38.753 13:40:48 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.753 13:40:48 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.753 --rc genhtml_branch_coverage=1 00:06:38.753 --rc genhtml_function_coverage=1 00:06:38.753 --rc genhtml_legend=1 00:06:38.753 --rc geninfo_all_blocks=1 00:06:38.753 --rc geninfo_unexecuted_blocks=1 00:06:38.753 00:06:38.753 ' 00:06:38.753 13:40:48 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.753 --rc genhtml_branch_coverage=1 00:06:38.753 --rc genhtml_function_coverage=1 00:06:38.753 --rc genhtml_legend=1 00:06:38.753 --rc geninfo_all_blocks=1 00:06:38.753 --rc geninfo_unexecuted_blocks=1 00:06:38.753 00:06:38.753 ' 00:06:38.753 13:40:48 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.753 --rc genhtml_branch_coverage=1 00:06:38.753 --rc genhtml_function_coverage=1 00:06:38.753 --rc genhtml_legend=1 00:06:38.753 --rc geninfo_all_blocks=1 00:06:38.753 --rc geninfo_unexecuted_blocks=1 00:06:38.753 00:06:38.753 ' 00:06:38.753 13:40:48 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.753 --rc genhtml_branch_coverage=1 00:06:38.753 --rc genhtml_function_coverage=1 00:06:38.753 --rc genhtml_legend=1 00:06:38.753 --rc geninfo_all_blocks=1 00:06:38.753 --rc geninfo_unexecuted_blocks=1 00:06:38.753 00:06:38.753 ' 00:06:38.753 13:40:48 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.753 13:40:48 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.753 13:40:48 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:38.753 13:40:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.753 13:40:48 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.753 13:40:48 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.753 13:40:48 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.753 13:40:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.754 13:40:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.754 13:40:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.754 13:40:48 json_config -- paths/export.sh@5 -- # export PATH 00:06:38.754 13:40:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.754 13:40:48 json_config -- nvmf/common.sh@51 -- # : 0 00:06:38.754 13:40:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.754 13:40:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.754 13:40:48 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.754 13:40:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.754 13:40:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.754 13:40:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.754 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.754 13:40:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.754 13:40:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.754 13:40:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:38.754 INFO: JSON configuration test init 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:38.754 13:40:48 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.754 13:40:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.754 13:40:48 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:38.754 13:40:48 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.754 13:40:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.011 13:40:48 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:39.011 13:40:48 json_config -- json_config/common.sh@9 -- # local app=target 00:06:39.011 13:40:48 json_config -- json_config/common.sh@10 -- # shift 
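Note: json_config_test_start_app launches the target paused with --wait-for-rpc, so only the RPC server comes up until the test feeds it configuration. A minimal sketch of that pattern, assuming a local build and that framework_start_init is the usual RPC used to resume initialization (illustrative):
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &   # start paused, RPC server only
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init                    # resume subsystem init once configured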
00:06:39.011 13:40:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:39.011 13:40:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:39.011 13:40:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:39.011 13:40:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:39.011 13:40:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:39.011 13:40:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57467 00:06:39.011 13:40:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:39.011 Waiting for target to run... 00:06:39.011 13:40:48 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:39.011 13:40:48 json_config -- json_config/common.sh@25 -- # waitforlisten 57467 /var/tmp/spdk_tgt.sock 00:06:39.011 13:40:48 json_config -- common/autotest_common.sh@831 -- # '[' -z 57467 ']' 00:06:39.011 13:40:48 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:39.011 13:40:48 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.011 13:40:48 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:39.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:39.011 13:40:48 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.011 13:40:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.011 [2024-10-01 13:40:49.004441] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:39.011 [2024-10-01 13:40:49.004788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57467 ] 00:06:39.268 [2024-10-01 13:40:49.420820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.527 [2024-10-01 13:40:49.536112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.095 00:06:40.095 13:40:50 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.095 13:40:50 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:40.095 13:40:50 json_config -- json_config/common.sh@26 -- # echo '' 00:06:40.095 13:40:50 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:40.095 13:40:50 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:40.095 13:40:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:40.095 13:40:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.095 13:40:50 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:40.095 13:40:50 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:40.095 13:40:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.095 13:40:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.095 13:40:50 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:40.095 13:40:50 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:40.095 13:40:50 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:40.353 [2024-10-01 13:40:50.408647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.611 13:40:50 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:40.611 13:40:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:40.611 13:40:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:40.611 13:40:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.611 13:40:50 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:40.611 13:40:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:40.611 13:40:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:40.611 13:40:50 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:40.611 13:40:50 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:40.611 13:40:50 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:40.611 13:40:50 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:40.611 13:40:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@54 -- # sort 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:40.869 13:40:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.869 13:40:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:40.869 13:40:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:40.869 13:40:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.869 13:40:50 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:40.869 13:40:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:40.869 13:40:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:41.128 MallocForNvmf0 00:06:41.128 13:40:51 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:41.128 13:40:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:41.386 MallocForNvmf1 00:06:41.386 13:40:51 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:41.386 13:40:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:41.645 [2024-10-01 13:40:51.797380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.645 13:40:51 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:41.645 13:40:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:42.212 13:40:52 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:42.212 13:40:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:42.212 13:40:52 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:42.212 13:40:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:42.470 13:40:52 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:42.470 13:40:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:42.728 [2024-10-01 13:40:52.886016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:42.986 13:40:52 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:42.986 13:40:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.986 13:40:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:42.986 13:40:52 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:42.986 13:40:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.986 13:40:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:42.986 13:40:52 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:06:42.986 13:40:52 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:42.986 13:40:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:43.244 MallocBdevForConfigChangeCheck 00:06:43.244 13:40:53 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:43.244 13:40:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:43.244 13:40:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.244 13:40:53 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:43.244 13:40:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:43.839 INFO: shutting down applications... 00:06:43.840 13:40:53 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:43.840 13:40:53 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:43.840 13:40:53 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:43.840 13:40:53 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:43.840 13:40:53 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:44.098 Calling clear_iscsi_subsystem 00:06:44.098 Calling clear_nvmf_subsystem 00:06:44.098 Calling clear_nbd_subsystem 00:06:44.098 Calling clear_ublk_subsystem 00:06:44.098 Calling clear_vhost_blk_subsystem 00:06:44.098 Calling clear_vhost_scsi_subsystem 00:06:44.098 Calling clear_bdev_subsystem 00:06:44.098 13:40:54 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:44.098 13:40:54 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:44.098 13:40:54 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:44.098 13:40:54 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:44.098 13:40:54 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:44.098 13:40:54 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:44.664 13:40:54 json_config -- json_config/json_config.sh@352 -- # break 00:06:44.664 13:40:54 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:44.664 13:40:54 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:44.664 13:40:54 json_config -- json_config/common.sh@31 -- # local app=target 00:06:44.664 13:40:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:44.664 13:40:54 json_config -- json_config/common.sh@35 -- # [[ -n 57467 ]] 00:06:44.664 13:40:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57467 00:06:44.664 13:40:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:44.664 13:40:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:44.664 13:40:54 json_config -- json_config/common.sh@41 -- # kill -0 57467 00:06:44.664 13:40:54 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:06:44.923 13:40:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:44.923 SPDK target shutdown done 00:06:44.923 INFO: relaunching applications... 00:06:44.923 Waiting for target to run... 00:06:44.923 13:40:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:44.923 13:40:55 json_config -- json_config/common.sh@41 -- # kill -0 57467 00:06:44.923 13:40:55 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:44.923 13:40:55 json_config -- json_config/common.sh@43 -- # break 00:06:44.923 13:40:55 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:44.923 13:40:55 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:44.923 13:40:55 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:44.923 13:40:55 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:44.923 13:40:55 json_config -- json_config/common.sh@9 -- # local app=target 00:06:44.923 13:40:55 json_config -- json_config/common.sh@10 -- # shift 00:06:44.923 13:40:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:44.923 13:40:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:44.923 13:40:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:44.923 13:40:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.923 13:40:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.923 13:40:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57668 00:06:44.923 13:40:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:44.923 13:40:55 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:44.923 13:40:55 json_config -- json_config/common.sh@25 -- # waitforlisten 57668 /var/tmp/spdk_tgt.sock 00:06:44.923 13:40:55 json_config -- common/autotest_common.sh@831 -- # '[' -z 57668 ']' 00:06:44.923 13:40:55 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:44.923 13:40:55 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.923 13:40:55 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:44.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:44.923 13:40:55 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.923 13:40:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.182 [2024-10-01 13:40:55.140686] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:06:45.182 [2024-10-01 13:40:55.141153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57668 ] 00:06:45.440 [2024-10-01 13:40:55.558948] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.699 [2024-10-01 13:40:55.655261] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.699 [2024-10-01 13:40:55.793281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.968 [2024-10-01 13:40:56.013167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.968 [2024-10-01 13:40:56.045257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:45.968 13:40:56 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.968 13:40:56 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:45.968 13:40:56 json_config -- json_config/common.sh@26 -- # echo '' 00:06:45.968 00:06:45.968 INFO: Checking if target configuration is the same... 00:06:45.968 13:40:56 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:45.968 13:40:56 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:45.968 13:40:56 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:45.968 13:40:56 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:45.968 13:40:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:45.968 + '[' 2 -ne 2 ']' 00:06:45.968 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:45.968 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:46.226 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:46.226 +++ basename /dev/fd/62 00:06:46.226 ++ mktemp /tmp/62.XXX 00:06:46.226 + tmp_file_1=/tmp/62.FJb 00:06:46.226 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:46.226 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:46.226 + tmp_file_2=/tmp/spdk_tgt_config.json.hwo 00:06:46.226 + ret=0 00:06:46.226 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:46.484 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:46.484 + diff -u /tmp/62.FJb /tmp/spdk_tgt_config.json.hwo 00:06:46.484 INFO: JSON config files are the same 00:06:46.484 + echo 'INFO: JSON config files are the same' 00:06:46.484 + rm /tmp/62.FJb /tmp/spdk_tgt_config.json.hwo 00:06:46.484 + exit 0 00:06:46.484 INFO: changing configuration and checking if this can be detected... 00:06:46.484 13:40:56 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:46.484 13:40:56 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
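Note: the "same configuration" check above dumps save_config twice, normalizes both dumps with config_filter.py -method sort, and diffs them; an empty diff means the relaunch preserved the configuration. Roughly, assuming config_filter.py reads the JSON on stdin as the json_diff.sh trace suggests (file names are illustrative):
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/a.json
  test/json_config/config_filter.py -method sort < /tmp/a.json > /tmp/a.sorted
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/b.sorted
  diff -u /tmp/a.sorted /tmp/b.sorted && echo 'INFO: JSON config files are the same'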
00:06:46.484 13:40:56 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:46.484 13:40:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:46.743 13:40:56 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:46.743 13:40:56 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:46.743 13:40:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:46.743 + '[' 2 -ne 2 ']' 00:06:46.743 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:46.743 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:46.743 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:46.743 +++ basename /dev/fd/62 00:06:46.743 ++ mktemp /tmp/62.XXX 00:06:46.743 + tmp_file_1=/tmp/62.UUJ 00:06:46.743 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:46.743 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:46.743 + tmp_file_2=/tmp/spdk_tgt_config.json.uvP 00:06:46.743 + ret=0 00:06:46.743 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:47.309 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:47.309 + diff -u /tmp/62.UUJ /tmp/spdk_tgt_config.json.uvP 00:06:47.309 + ret=1 00:06:47.309 + echo '=== Start of file: /tmp/62.UUJ ===' 00:06:47.309 + cat /tmp/62.UUJ 00:06:47.309 + echo '=== End of file: /tmp/62.UUJ ===' 00:06:47.309 + echo '' 00:06:47.309 + echo '=== Start of file: /tmp/spdk_tgt_config.json.uvP ===' 00:06:47.309 + cat /tmp/spdk_tgt_config.json.uvP 00:06:47.309 + echo '=== End of file: /tmp/spdk_tgt_config.json.uvP ===' 00:06:47.309 + echo '' 00:06:47.309 + rm /tmp/62.UUJ /tmp/spdk_tgt_config.json.uvP 00:06:47.309 + exit 1 00:06:47.309 INFO: configuration change detected. 00:06:47.309 13:40:57 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
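Note: the change-detection half then deletes MallocBdevForConfigChangeCheck over RPC and repeats the diff, this time expecting a non-empty result (ret=1). A short sketch under the same assumptions (/tmp/before.json stands in for the earlier dump):
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/after.json
  diff -u /tmp/before.json /tmp/after.json || echo 'INFO: configuration change detected.'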
00:06:47.309 13:40:57 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:47.309 13:40:57 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:47.309 13:40:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:47.309 13:40:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.309 13:40:57 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:47.309 13:40:57 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:47.309 13:40:57 json_config -- json_config/json_config.sh@324 -- # [[ -n 57668 ]] 00:06:47.309 13:40:57 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:47.309 13:40:57 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:47.309 13:40:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:47.309 13:40:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.309 13:40:57 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:47.309 13:40:57 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:47.309 13:40:57 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:47.309 13:40:57 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:47.309 13:40:57 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:47.309 13:40:57 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:47.309 13:40:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:47.310 13:40:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.310 13:40:57 json_config -- json_config/json_config.sh@330 -- # killprocess 57668 00:06:47.310 13:40:57 json_config -- common/autotest_common.sh@950 -- # '[' -z 57668 ']' 00:06:47.310 13:40:57 json_config -- common/autotest_common.sh@954 -- # kill -0 57668 00:06:47.310 13:40:57 json_config -- common/autotest_common.sh@955 -- # uname 00:06:47.310 13:40:57 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.310 13:40:57 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57668 00:06:47.310 killing process with pid 57668 00:06:47.310 13:40:57 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.310 13:40:57 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.310 13:40:57 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57668' 00:06:47.310 13:40:57 json_config -- common/autotest_common.sh@969 -- # kill 57668 00:06:47.310 13:40:57 json_config -- common/autotest_common.sh@974 -- # wait 57668 00:06:47.568 13:40:57 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:47.568 13:40:57 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:47.568 13:40:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:47.568 13:40:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.826 INFO: Success 00:06:47.826 13:40:57 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:47.826 13:40:57 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:47.826 ************************************ 00:06:47.826 END TEST json_config 00:06:47.826 
************************************ 00:06:47.826 00:06:47.826 real 0m9.000s 00:06:47.826 user 0m13.083s 00:06:47.826 sys 0m1.745s 00:06:47.826 13:40:57 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.826 13:40:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.826 13:40:57 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:47.826 13:40:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.826 13:40:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.826 13:40:57 -- common/autotest_common.sh@10 -- # set +x 00:06:47.826 ************************************ 00:06:47.826 START TEST json_config_extra_key 00:06:47.826 ************************************ 00:06:47.826 13:40:57 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:47.826 13:40:57 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:47.826 13:40:57 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:47.826 13:40:57 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:47.826 13:40:57 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:47.826 13:40:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.826 13:40:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.826 13:40:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.826 13:40:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.826 13:40:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.826 13:40:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.826 13:40:57 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.826 13:40:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.826 13:40:57 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.826 13:40:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:47.827 13:40:57 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.827 13:40:57 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:47.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.827 --rc genhtml_branch_coverage=1 00:06:47.827 --rc genhtml_function_coverage=1 00:06:47.827 --rc genhtml_legend=1 00:06:47.827 --rc geninfo_all_blocks=1 00:06:47.827 --rc geninfo_unexecuted_blocks=1 00:06:47.827 00:06:47.827 ' 00:06:47.827 13:40:57 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:47.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.827 --rc genhtml_branch_coverage=1 00:06:47.827 --rc genhtml_function_coverage=1 00:06:47.827 --rc genhtml_legend=1 00:06:47.827 --rc geninfo_all_blocks=1 00:06:47.827 --rc geninfo_unexecuted_blocks=1 00:06:47.827 00:06:47.827 ' 00:06:47.827 13:40:57 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:47.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.827 --rc genhtml_branch_coverage=1 00:06:47.827 --rc genhtml_function_coverage=1 00:06:47.827 --rc genhtml_legend=1 00:06:47.827 --rc geninfo_all_blocks=1 00:06:47.827 --rc geninfo_unexecuted_blocks=1 00:06:47.827 00:06:47.827 ' 00:06:47.827 13:40:57 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:47.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.827 --rc genhtml_branch_coverage=1 00:06:47.827 --rc genhtml_function_coverage=1 00:06:47.827 --rc genhtml_legend=1 00:06:47.827 --rc geninfo_all_blocks=1 00:06:47.827 --rc geninfo_unexecuted_blocks=1 00:06:47.827 00:06:47.827 ' 00:06:47.827 13:40:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.827 13:40:57 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.827 13:40:57 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.827 13:40:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.827 13:40:57 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.827 13:40:57 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.827 13:40:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:47.827 13:40:57 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:47.827 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:47.827 13:40:57 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:47.827 13:40:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:47.827 13:40:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:47.827 13:40:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:47.827 13:40:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:47.827 13:40:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:47.827 13:40:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:47.827 13:40:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:47.827 INFO: launching applications... 00:06:47.827 Waiting for target to run... 00:06:47.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:47.827 13:40:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:47.827 13:40:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:47.827 13:40:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:47.827 13:40:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
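Before launching the target, the extra_key test (after sourcing json_config/common.sh, as traced above) tracks the application it manages through associative arrays keyed by app name, here 'target'. A reduced sketch of that bookkeeping, with the socket, parameters and config path copied from the trace:

#!/usr/bin/env bash
# Reduced sketch of the per-app bookkeeping visible in the xtrace above.
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

app=target
echo "launching $app with params '${app_params[$app]}' on ${app_socket[$app]} using ${configs_path[$app]}"
# The real harness then starts spdk_tgt with these parameters, stores its PID in app_pid[$app],
# and waits for the RPC socket to appear before the test body runs.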
00:06:47.827 13:40:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:47.827 13:40:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:47.827 13:40:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:47.827 13:40:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:47.827 13:40:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:47.827 13:40:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:47.827 13:40:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:47.827 13:40:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:47.827 13:40:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57821 00:06:47.827 13:40:57 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:47.827 13:40:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:47.827 13:40:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57821 /var/tmp/spdk_tgt.sock 00:06:47.827 13:40:57 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57821 ']' 00:06:47.827 13:40:57 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:47.827 13:40:57 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.828 13:40:57 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:47.828 13:40:57 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.828 13:40:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:48.085 [2024-10-01 13:40:58.036421] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:48.085 [2024-10-01 13:40:58.036518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57821 ] 00:06:48.343 [2024-10-01 13:40:58.452724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.601 [2024-10-01 13:40:58.555805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.601 [2024-10-01 13:40:58.589126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.168 00:06:49.168 INFO: shutting down applications... 00:06:49.168 13:40:59 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.168 13:40:59 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:49.168 13:40:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:49.168 13:40:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
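The shutdown that follows (json_config/common.sh@38 onwards) is a bounded polling loop: SIGINT is sent to the target and kill -0 is retried, in this run up to 30 times with 0.5 s pauses, until the process disappears. A generic sketch of that pattern; the function name and argument handling are illustrative, and only the signal, retry count and sleep interval come from the trace:

# Bounded graceful-shutdown loop, mirroring the trace below.
shutdown_app() {
    local pid=$1                                   # spdk_tgt PID recorded at launch (57821 in this run)
    kill -SIGINT "$pid" 2>/dev/null || return 0    # ask the target to exit cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0     # process gone: shutdown done
        sleep 0.5
    done
    echo "app with pid $pid did not shut down in time" >&2
    return 1
}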
00:06:49.168 13:40:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:49.168 13:40:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:49.168 13:40:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:49.168 13:40:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57821 ]] 00:06:49.168 13:40:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57821 00:06:49.168 13:40:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:49.168 13:40:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:49.168 13:40:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57821 00:06:49.168 13:40:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:49.769 13:40:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:49.769 13:40:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:49.769 13:40:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57821 00:06:49.769 13:40:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:49.769 13:40:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:49.769 SPDK target shutdown done 00:06:49.769 13:40:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:49.769 13:40:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:49.769 Success 00:06:49.769 13:40:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:49.769 00:06:49.769 real 0m1.808s 00:06:49.769 user 0m1.799s 00:06:49.769 sys 0m0.448s 00:06:49.769 13:40:59 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.769 13:40:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:49.770 ************************************ 00:06:49.770 END TEST json_config_extra_key 00:06:49.770 ************************************ 00:06:49.770 13:40:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:49.770 13:40:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.770 13:40:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.770 13:40:59 -- common/autotest_common.sh@10 -- # set +x 00:06:49.770 ************************************ 00:06:49.770 START TEST alias_rpc 00:06:49.770 ************************************ 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:49.770 * Looking for test storage... 
00:06:49.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.770 13:40:59 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:49.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.770 --rc genhtml_branch_coverage=1 00:06:49.770 --rc genhtml_function_coverage=1 00:06:49.770 --rc genhtml_legend=1 00:06:49.770 --rc geninfo_all_blocks=1 00:06:49.770 --rc geninfo_unexecuted_blocks=1 00:06:49.770 00:06:49.770 ' 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:49.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.770 --rc genhtml_branch_coverage=1 00:06:49.770 --rc genhtml_function_coverage=1 00:06:49.770 --rc genhtml_legend=1 00:06:49.770 --rc geninfo_all_blocks=1 00:06:49.770 --rc geninfo_unexecuted_blocks=1 00:06:49.770 00:06:49.770 ' 00:06:49.770 13:40:59 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:49.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.770 --rc genhtml_branch_coverage=1 00:06:49.770 --rc genhtml_function_coverage=1 00:06:49.770 --rc genhtml_legend=1 00:06:49.770 --rc geninfo_all_blocks=1 00:06:49.770 --rc geninfo_unexecuted_blocks=1 00:06:49.770 00:06:49.770 ' 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:49.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.770 --rc genhtml_branch_coverage=1 00:06:49.770 --rc genhtml_function_coverage=1 00:06:49.770 --rc genhtml_legend=1 00:06:49.770 --rc geninfo_all_blocks=1 00:06:49.770 --rc geninfo_unexecuted_blocks=1 00:06:49.770 00:06:49.770 ' 00:06:49.770 13:40:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:49.770 13:40:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57894 00:06:49.770 13:40:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.770 13:40:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57894 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57894 ']' 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.770 13:40:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.770 [2024-10-01 13:40:59.943317] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:06:49.770 [2024-10-01 13:40:59.943450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57894 ] 00:06:50.028 [2024-10-01 13:41:00.083359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.286 [2024-10-01 13:41:00.223279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.286 [2024-10-01 13:41:00.306767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.853 13:41:01 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.853 13:41:01 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:50.853 13:41:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:51.420 13:41:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57894 00:06:51.420 13:41:01 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57894 ']' 00:06:51.420 13:41:01 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57894 00:06:51.420 13:41:01 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:51.420 13:41:01 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.420 13:41:01 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57894 00:06:51.420 13:41:01 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.420 13:41:01 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.420 killing process with pid 57894 00:06:51.420 13:41:01 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57894' 00:06:51.420 13:41:01 alias_rpc -- common/autotest_common.sh@969 -- # kill 57894 00:06:51.420 13:41:01 alias_rpc -- common/autotest_common.sh@974 -- # wait 57894 00:06:51.986 00:06:51.986 real 0m2.382s 00:06:51.986 user 0m2.658s 00:06:51.986 sys 0m0.547s 00:06:51.986 13:41:02 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.986 13:41:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.986 ************************************ 00:06:51.986 END TEST alias_rpc 00:06:51.986 ************************************ 00:06:51.986 13:41:02 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:51.986 13:41:02 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:51.986 13:41:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.986 13:41:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.986 13:41:02 -- common/autotest_common.sh@10 -- # set +x 00:06:51.986 ************************************ 00:06:51.986 START TEST spdkcli_tcp 00:06:51.986 ************************************ 00:06:51.986 13:41:02 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:52.244 * Looking for test storage... 
00:06:52.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.244 13:41:02 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:52.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.244 --rc genhtml_branch_coverage=1 00:06:52.244 --rc genhtml_function_coverage=1 00:06:52.244 --rc genhtml_legend=1 00:06:52.244 --rc geninfo_all_blocks=1 00:06:52.244 --rc geninfo_unexecuted_blocks=1 00:06:52.244 00:06:52.244 ' 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:52.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.244 --rc genhtml_branch_coverage=1 00:06:52.244 --rc genhtml_function_coverage=1 00:06:52.244 --rc genhtml_legend=1 00:06:52.244 --rc geninfo_all_blocks=1 00:06:52.244 --rc geninfo_unexecuted_blocks=1 00:06:52.244 
00:06:52.244 ' 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:52.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.244 --rc genhtml_branch_coverage=1 00:06:52.244 --rc genhtml_function_coverage=1 00:06:52.244 --rc genhtml_legend=1 00:06:52.244 --rc geninfo_all_blocks=1 00:06:52.244 --rc geninfo_unexecuted_blocks=1 00:06:52.244 00:06:52.244 ' 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:52.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.244 --rc genhtml_branch_coverage=1 00:06:52.244 --rc genhtml_function_coverage=1 00:06:52.244 --rc genhtml_legend=1 00:06:52.244 --rc geninfo_all_blocks=1 00:06:52.244 --rc geninfo_unexecuted_blocks=1 00:06:52.244 00:06:52.244 ' 00:06:52.244 13:41:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:52.244 13:41:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:52.244 13:41:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:52.244 13:41:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:52.244 13:41:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:52.244 13:41:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:52.244 13:41:02 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.244 13:41:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57984 00:06:52.244 13:41:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:52.244 13:41:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57984 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57984 ']' 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.244 13:41:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.244 [2024-10-01 13:41:02.349769] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:06:52.245 [2024-10-01 13:41:02.350677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57984 ] 00:06:52.503 [2024-10-01 13:41:02.491273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.503 [2024-10-01 13:41:02.650076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.503 [2024-10-01 13:41:02.650086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.760 [2024-10-01 13:41:02.726829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.336 13:41:03 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.336 13:41:03 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:53.336 13:41:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58001 00:06:53.336 13:41:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:53.336 13:41:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:53.595 [ 00:06:53.595 "bdev_malloc_delete", 00:06:53.595 "bdev_malloc_create", 00:06:53.595 "bdev_null_resize", 00:06:53.595 "bdev_null_delete", 00:06:53.595 "bdev_null_create", 00:06:53.595 "bdev_nvme_cuse_unregister", 00:06:53.595 "bdev_nvme_cuse_register", 00:06:53.595 "bdev_opal_new_user", 00:06:53.595 "bdev_opal_set_lock_state", 00:06:53.595 "bdev_opal_delete", 00:06:53.595 "bdev_opal_get_info", 00:06:53.595 "bdev_opal_create", 00:06:53.595 "bdev_nvme_opal_revert", 00:06:53.595 "bdev_nvme_opal_init", 00:06:53.595 "bdev_nvme_send_cmd", 00:06:53.595 "bdev_nvme_set_keys", 00:06:53.595 "bdev_nvme_get_path_iostat", 00:06:53.595 "bdev_nvme_get_mdns_discovery_info", 00:06:53.595 "bdev_nvme_stop_mdns_discovery", 00:06:53.595 "bdev_nvme_start_mdns_discovery", 00:06:53.595 "bdev_nvme_set_multipath_policy", 00:06:53.595 "bdev_nvme_set_preferred_path", 00:06:53.595 "bdev_nvme_get_io_paths", 00:06:53.595 "bdev_nvme_remove_error_injection", 00:06:53.595 "bdev_nvme_add_error_injection", 00:06:53.595 "bdev_nvme_get_discovery_info", 00:06:53.595 "bdev_nvme_stop_discovery", 00:06:53.595 "bdev_nvme_start_discovery", 00:06:53.595 "bdev_nvme_get_controller_health_info", 00:06:53.595 "bdev_nvme_disable_controller", 00:06:53.595 "bdev_nvme_enable_controller", 00:06:53.595 "bdev_nvme_reset_controller", 00:06:53.595 "bdev_nvme_get_transport_statistics", 00:06:53.595 "bdev_nvme_apply_firmware", 00:06:53.595 "bdev_nvme_detach_controller", 00:06:53.595 "bdev_nvme_get_controllers", 00:06:53.595 "bdev_nvme_attach_controller", 00:06:53.595 "bdev_nvme_set_hotplug", 00:06:53.595 "bdev_nvme_set_options", 00:06:53.595 "bdev_passthru_delete", 00:06:53.595 "bdev_passthru_create", 00:06:53.595 "bdev_lvol_set_parent_bdev", 00:06:53.595 "bdev_lvol_set_parent", 00:06:53.595 "bdev_lvol_check_shallow_copy", 00:06:53.595 "bdev_lvol_start_shallow_copy", 00:06:53.595 "bdev_lvol_grow_lvstore", 00:06:53.595 "bdev_lvol_get_lvols", 00:06:53.595 "bdev_lvol_get_lvstores", 00:06:53.595 "bdev_lvol_delete", 00:06:53.595 "bdev_lvol_set_read_only", 00:06:53.595 "bdev_lvol_resize", 00:06:53.595 "bdev_lvol_decouple_parent", 00:06:53.595 "bdev_lvol_inflate", 00:06:53.595 "bdev_lvol_rename", 00:06:53.595 "bdev_lvol_clone_bdev", 00:06:53.595 "bdev_lvol_clone", 00:06:53.595 "bdev_lvol_snapshot", 
00:06:53.595 "bdev_lvol_create", 00:06:53.595 "bdev_lvol_delete_lvstore", 00:06:53.595 "bdev_lvol_rename_lvstore", 00:06:53.595 "bdev_lvol_create_lvstore", 00:06:53.595 "bdev_raid_set_options", 00:06:53.595 "bdev_raid_remove_base_bdev", 00:06:53.595 "bdev_raid_add_base_bdev", 00:06:53.595 "bdev_raid_delete", 00:06:53.595 "bdev_raid_create", 00:06:53.595 "bdev_raid_get_bdevs", 00:06:53.595 "bdev_error_inject_error", 00:06:53.595 "bdev_error_delete", 00:06:53.595 "bdev_error_create", 00:06:53.595 "bdev_split_delete", 00:06:53.595 "bdev_split_create", 00:06:53.595 "bdev_delay_delete", 00:06:53.595 "bdev_delay_create", 00:06:53.595 "bdev_delay_update_latency", 00:06:53.595 "bdev_zone_block_delete", 00:06:53.595 "bdev_zone_block_create", 00:06:53.595 "blobfs_create", 00:06:53.595 "blobfs_detect", 00:06:53.595 "blobfs_set_cache_size", 00:06:53.595 "bdev_aio_delete", 00:06:53.595 "bdev_aio_rescan", 00:06:53.595 "bdev_aio_create", 00:06:53.595 "bdev_ftl_set_property", 00:06:53.595 "bdev_ftl_get_properties", 00:06:53.595 "bdev_ftl_get_stats", 00:06:53.595 "bdev_ftl_unmap", 00:06:53.595 "bdev_ftl_unload", 00:06:53.595 "bdev_ftl_delete", 00:06:53.595 "bdev_ftl_load", 00:06:53.595 "bdev_ftl_create", 00:06:53.595 "bdev_virtio_attach_controller", 00:06:53.595 "bdev_virtio_scsi_get_devices", 00:06:53.595 "bdev_virtio_detach_controller", 00:06:53.595 "bdev_virtio_blk_set_hotplug", 00:06:53.595 "bdev_iscsi_delete", 00:06:53.595 "bdev_iscsi_create", 00:06:53.595 "bdev_iscsi_set_options", 00:06:53.595 "bdev_uring_delete", 00:06:53.595 "bdev_uring_rescan", 00:06:53.595 "bdev_uring_create", 00:06:53.595 "accel_error_inject_error", 00:06:53.595 "ioat_scan_accel_module", 00:06:53.595 "dsa_scan_accel_module", 00:06:53.595 "iaa_scan_accel_module", 00:06:53.595 "keyring_file_remove_key", 00:06:53.595 "keyring_file_add_key", 00:06:53.595 "keyring_linux_set_options", 00:06:53.595 "fsdev_aio_delete", 00:06:53.595 "fsdev_aio_create", 00:06:53.595 "iscsi_get_histogram", 00:06:53.595 "iscsi_enable_histogram", 00:06:53.595 "iscsi_set_options", 00:06:53.595 "iscsi_get_auth_groups", 00:06:53.595 "iscsi_auth_group_remove_secret", 00:06:53.595 "iscsi_auth_group_add_secret", 00:06:53.595 "iscsi_delete_auth_group", 00:06:53.595 "iscsi_create_auth_group", 00:06:53.595 "iscsi_set_discovery_auth", 00:06:53.595 "iscsi_get_options", 00:06:53.595 "iscsi_target_node_request_logout", 00:06:53.595 "iscsi_target_node_set_redirect", 00:06:53.595 "iscsi_target_node_set_auth", 00:06:53.595 "iscsi_target_node_add_lun", 00:06:53.595 "iscsi_get_stats", 00:06:53.595 "iscsi_get_connections", 00:06:53.595 "iscsi_portal_group_set_auth", 00:06:53.595 "iscsi_start_portal_group", 00:06:53.595 "iscsi_delete_portal_group", 00:06:53.595 "iscsi_create_portal_group", 00:06:53.595 "iscsi_get_portal_groups", 00:06:53.595 "iscsi_delete_target_node", 00:06:53.595 "iscsi_target_node_remove_pg_ig_maps", 00:06:53.595 "iscsi_target_node_add_pg_ig_maps", 00:06:53.596 "iscsi_create_target_node", 00:06:53.596 "iscsi_get_target_nodes", 00:06:53.596 "iscsi_delete_initiator_group", 00:06:53.596 "iscsi_initiator_group_remove_initiators", 00:06:53.596 "iscsi_initiator_group_add_initiators", 00:06:53.596 "iscsi_create_initiator_group", 00:06:53.596 "iscsi_get_initiator_groups", 00:06:53.596 "nvmf_set_crdt", 00:06:53.596 "nvmf_set_config", 00:06:53.596 "nvmf_set_max_subsystems", 00:06:53.596 "nvmf_stop_mdns_prr", 00:06:53.596 "nvmf_publish_mdns_prr", 00:06:53.596 "nvmf_subsystem_get_listeners", 00:06:53.596 "nvmf_subsystem_get_qpairs", 00:06:53.596 
"nvmf_subsystem_get_controllers", 00:06:53.596 "nvmf_get_stats", 00:06:53.596 "nvmf_get_transports", 00:06:53.596 "nvmf_create_transport", 00:06:53.596 "nvmf_get_targets", 00:06:53.596 "nvmf_delete_target", 00:06:53.596 "nvmf_create_target", 00:06:53.596 "nvmf_subsystem_allow_any_host", 00:06:53.596 "nvmf_subsystem_set_keys", 00:06:53.596 "nvmf_subsystem_remove_host", 00:06:53.596 "nvmf_subsystem_add_host", 00:06:53.596 "nvmf_ns_remove_host", 00:06:53.596 "nvmf_ns_add_host", 00:06:53.596 "nvmf_subsystem_remove_ns", 00:06:53.596 "nvmf_subsystem_set_ns_ana_group", 00:06:53.596 "nvmf_subsystem_add_ns", 00:06:53.596 "nvmf_subsystem_listener_set_ana_state", 00:06:53.596 "nvmf_discovery_get_referrals", 00:06:53.596 "nvmf_discovery_remove_referral", 00:06:53.596 "nvmf_discovery_add_referral", 00:06:53.596 "nvmf_subsystem_remove_listener", 00:06:53.596 "nvmf_subsystem_add_listener", 00:06:53.596 "nvmf_delete_subsystem", 00:06:53.596 "nvmf_create_subsystem", 00:06:53.596 "nvmf_get_subsystems", 00:06:53.596 "env_dpdk_get_mem_stats", 00:06:53.596 "nbd_get_disks", 00:06:53.596 "nbd_stop_disk", 00:06:53.596 "nbd_start_disk", 00:06:53.596 "ublk_recover_disk", 00:06:53.596 "ublk_get_disks", 00:06:53.596 "ublk_stop_disk", 00:06:53.596 "ublk_start_disk", 00:06:53.596 "ublk_destroy_target", 00:06:53.596 "ublk_create_target", 00:06:53.596 "virtio_blk_create_transport", 00:06:53.596 "virtio_blk_get_transports", 00:06:53.596 "vhost_controller_set_coalescing", 00:06:53.596 "vhost_get_controllers", 00:06:53.596 "vhost_delete_controller", 00:06:53.596 "vhost_create_blk_controller", 00:06:53.596 "vhost_scsi_controller_remove_target", 00:06:53.596 "vhost_scsi_controller_add_target", 00:06:53.596 "vhost_start_scsi_controller", 00:06:53.596 "vhost_create_scsi_controller", 00:06:53.596 "thread_set_cpumask", 00:06:53.596 "scheduler_set_options", 00:06:53.596 "framework_get_governor", 00:06:53.596 "framework_get_scheduler", 00:06:53.596 "framework_set_scheduler", 00:06:53.596 "framework_get_reactors", 00:06:53.596 "thread_get_io_channels", 00:06:53.596 "thread_get_pollers", 00:06:53.596 "thread_get_stats", 00:06:53.596 "framework_monitor_context_switch", 00:06:53.596 "spdk_kill_instance", 00:06:53.596 "log_enable_timestamps", 00:06:53.596 "log_get_flags", 00:06:53.596 "log_clear_flag", 00:06:53.596 "log_set_flag", 00:06:53.596 "log_get_level", 00:06:53.596 "log_set_level", 00:06:53.596 "log_get_print_level", 00:06:53.596 "log_set_print_level", 00:06:53.596 "framework_enable_cpumask_locks", 00:06:53.596 "framework_disable_cpumask_locks", 00:06:53.596 "framework_wait_init", 00:06:53.596 "framework_start_init", 00:06:53.596 "scsi_get_devices", 00:06:53.596 "bdev_get_histogram", 00:06:53.596 "bdev_enable_histogram", 00:06:53.596 "bdev_set_qos_limit", 00:06:53.596 "bdev_set_qd_sampling_period", 00:06:53.596 "bdev_get_bdevs", 00:06:53.596 "bdev_reset_iostat", 00:06:53.596 "bdev_get_iostat", 00:06:53.596 "bdev_examine", 00:06:53.596 "bdev_wait_for_examine", 00:06:53.596 "bdev_set_options", 00:06:53.596 "accel_get_stats", 00:06:53.596 "accel_set_options", 00:06:53.596 "accel_set_driver", 00:06:53.596 "accel_crypto_key_destroy", 00:06:53.596 "accel_crypto_keys_get", 00:06:53.596 "accel_crypto_key_create", 00:06:53.596 "accel_assign_opc", 00:06:53.596 "accel_get_module_info", 00:06:53.596 "accel_get_opc_assignments", 00:06:53.596 "vmd_rescan", 00:06:53.596 "vmd_remove_device", 00:06:53.596 "vmd_enable", 00:06:53.596 "sock_get_default_impl", 00:06:53.596 "sock_set_default_impl", 00:06:53.596 "sock_impl_set_options", 00:06:53.596 
"sock_impl_get_options", 00:06:53.596 "iobuf_get_stats", 00:06:53.596 "iobuf_set_options", 00:06:53.596 "keyring_get_keys", 00:06:53.596 "framework_get_pci_devices", 00:06:53.596 "framework_get_config", 00:06:53.596 "framework_get_subsystems", 00:06:53.596 "fsdev_set_opts", 00:06:53.596 "fsdev_get_opts", 00:06:53.596 "trace_get_info", 00:06:53.596 "trace_get_tpoint_group_mask", 00:06:53.596 "trace_disable_tpoint_group", 00:06:53.596 "trace_enable_tpoint_group", 00:06:53.596 "trace_clear_tpoint_mask", 00:06:53.596 "trace_set_tpoint_mask", 00:06:53.596 "notify_get_notifications", 00:06:53.596 "notify_get_types", 00:06:53.596 "spdk_get_version", 00:06:53.596 "rpc_get_methods" 00:06:53.596 ] 00:06:53.596 13:41:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:53.596 13:41:03 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.596 13:41:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:53.596 13:41:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:53.596 13:41:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57984 00:06:53.596 13:41:03 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57984 ']' 00:06:53.596 13:41:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57984 00:06:53.596 13:41:03 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:53.596 13:41:03 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.596 13:41:03 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57984 00:06:53.596 13:41:03 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.596 killing process with pid 57984 00:06:53.596 13:41:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.596 13:41:03 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57984' 00:06:53.596 13:41:03 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57984 00:06:53.596 13:41:03 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57984 00:06:54.162 00:06:54.162 real 0m2.054s 00:06:54.162 user 0m3.729s 00:06:54.162 sys 0m0.530s 00:06:54.162 13:41:04 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.162 13:41:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.162 ************************************ 00:06:54.162 END TEST spdkcli_tcp 00:06:54.162 ************************************ 00:06:54.162 13:41:04 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:54.162 13:41:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.162 13:41:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.162 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:06:54.162 ************************************ 00:06:54.162 START TEST dpdk_mem_utility 00:06:54.162 ************************************ 00:06:54.162 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:54.162 * Looking for test storage... 
00:06:54.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:54.162 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:54.162 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:54.162 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:54.420 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.420 13:41:04 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:54.420 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.420 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:54.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.421 --rc genhtml_branch_coverage=1 00:06:54.421 --rc genhtml_function_coverage=1 00:06:54.421 --rc genhtml_legend=1 00:06:54.421 --rc geninfo_all_blocks=1 00:06:54.421 --rc geninfo_unexecuted_blocks=1 00:06:54.421 00:06:54.421 ' 00:06:54.421 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:54.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.421 --rc 
genhtml_branch_coverage=1 00:06:54.421 --rc genhtml_function_coverage=1 00:06:54.421 --rc genhtml_legend=1 00:06:54.421 --rc geninfo_all_blocks=1 00:06:54.421 --rc geninfo_unexecuted_blocks=1 00:06:54.421 00:06:54.421 ' 00:06:54.421 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:54.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.421 --rc genhtml_branch_coverage=1 00:06:54.421 --rc genhtml_function_coverage=1 00:06:54.421 --rc genhtml_legend=1 00:06:54.421 --rc geninfo_all_blocks=1 00:06:54.421 --rc geninfo_unexecuted_blocks=1 00:06:54.421 00:06:54.421 ' 00:06:54.421 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:54.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.421 --rc genhtml_branch_coverage=1 00:06:54.421 --rc genhtml_function_coverage=1 00:06:54.421 --rc genhtml_legend=1 00:06:54.421 --rc geninfo_all_blocks=1 00:06:54.421 --rc geninfo_unexecuted_blocks=1 00:06:54.421 00:06:54.421 ' 00:06:54.421 13:41:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:54.421 13:41:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58083 00:06:54.421 13:41:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:54.421 13:41:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58083 00:06:54.421 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58083 ']' 00:06:54.421 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.421 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.421 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.421 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.421 13:41:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:54.421 [2024-10-01 13:41:04.479266] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:06:54.421 [2024-10-01 13:41:04.480029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58083 ] 00:06:54.679 [2024-10-01 13:41:04.623886] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.679 [2024-10-01 13:41:04.746712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.679 [2024-10-01 13:41:04.823615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.617 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.617 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:55.617 13:41:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:55.617 13:41:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:55.617 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.617 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:55.617 { 00:06:55.617 "filename": "/tmp/spdk_mem_dump.txt" 00:06:55.617 } 00:06:55.617 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.617 13:41:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:55.617 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:55.617 1 heaps totaling size 860.000000 MiB 00:06:55.617 size: 860.000000 MiB heap id: 0 00:06:55.617 end heaps---------- 00:06:55.617 9 mempools totaling size 642.649841 MiB 00:06:55.617 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:55.617 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:55.617 size: 92.545471 MiB name: bdev_io_58083 00:06:55.617 size: 51.011292 MiB name: evtpool_58083 00:06:55.617 size: 50.003479 MiB name: msgpool_58083 00:06:55.617 size: 36.509338 MiB name: fsdev_io_58083 00:06:55.617 size: 21.763794 MiB name: PDU_Pool 00:06:55.617 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:55.617 size: 0.026123 MiB name: Session_Pool 00:06:55.617 end mempools------- 00:06:55.617 6 memzones totaling size 4.142822 MiB 00:06:55.617 size: 1.000366 MiB name: RG_ring_0_58083 00:06:55.617 size: 1.000366 MiB name: RG_ring_1_58083 00:06:55.617 size: 1.000366 MiB name: RG_ring_4_58083 00:06:55.617 size: 1.000366 MiB name: RG_ring_5_58083 00:06:55.617 size: 0.125366 MiB name: RG_ring_2_58083 00:06:55.617 size: 0.015991 MiB name: RG_ring_3_58083 00:06:55.617 end memzones------- 00:06:55.617 13:41:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:55.617 heap id: 0 total size: 860.000000 MiB number of busy elements: 310 number of free elements: 16 00:06:55.617 list of free elements. 
size: 13.935974 MiB 00:06:55.617 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:55.617 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:55.617 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:55.617 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:55.617 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:55.617 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:55.617 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:55.617 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:55.617 element at address: 0x200000200000 with size: 0.835022 MiB 00:06:55.617 element at address: 0x20001d800000 with size: 0.566956 MiB 00:06:55.617 element at address: 0x20000d800000 with size: 0.489807 MiB 00:06:55.617 element at address: 0x200003e00000 with size: 0.487915 MiB 00:06:55.617 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:55.617 element at address: 0x200007000000 with size: 0.480286 MiB 00:06:55.617 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:55.617 element at address: 0x200003a00000 with size: 0.353210 MiB 00:06:55.617 list of standard malloc elements. size: 199.267334 MiB 00:06:55.617 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:55.617 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:55.617 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:55.617 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:55.617 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:55.617 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:55.617 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:55.617 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:55.617 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:55.617 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6e00 with size: 0.000183 MiB 
00:06:55.617 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:55.617 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003a5a6c0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003a5eb80 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:55.617 element at 
address: 0x200003e7d780 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:55.617 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000707af40 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:55.618 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000d87d940 
with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:55.618 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891240 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891300 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893100 with size: 0.000183 MiB 
00:06:55.618 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:55.618 element at 
address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:55.618 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6e7c0 
with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:55.619 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:55.619 list of memzone associated elements. 
size: 646.796692 MiB 00:06:55.619 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:55.619 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:55.619 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:55.619 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:55.619 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:55.619 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58083_0 00:06:55.619 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:55.619 associated memzone info: size: 48.002930 MiB name: MP_evtpool_58083_0 00:06:55.619 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:55.619 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58083_0 00:06:55.619 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:55.619 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58083_0 00:06:55.619 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:55.619 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:55.619 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:55.619 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:55.619 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:55.619 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_58083 00:06:55.619 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:55.619 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58083 00:06:55.619 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:55.619 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58083 00:06:55.619 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:55.619 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:55.619 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:55.619 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:55.619 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:55.619 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:55.619 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:55.619 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:55.619 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:55.619 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58083 00:06:55.619 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:55.619 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58083 00:06:55.619 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:55.619 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58083 00:06:55.619 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:55.619 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58083 00:06:55.619 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:55.619 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58083 00:06:55.619 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:55.619 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58083 00:06:55.619 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:55.619 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:55.619 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:55.619 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:55.619 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:55.619 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:55.619 element at address: 0x200003a5ec40 with size: 0.125488 MiB 00:06:55.619 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58083 00:06:55.619 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:55.619 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:55.619 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:55.619 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:55.619 element at address: 0x200003a5a980 with size: 0.016113 MiB 00:06:55.619 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58083 00:06:55.619 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:55.619 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:55.619 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:55.619 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58083 00:06:55.619 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:55.619 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58083 00:06:55.619 element at address: 0x200003a5a780 with size: 0.000305 MiB 00:06:55.619 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58083 00:06:55.619 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:55.619 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:55.619 13:41:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:55.619 13:41:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58083 00:06:55.619 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58083 ']' 00:06:55.619 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58083 00:06:55.619 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:55.619 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.619 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58083 00:06:55.619 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.619 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.619 killing process with pid 58083 00:06:55.619 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58083' 00:06:55.619 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58083 00:06:55.620 13:41:05 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58083 00:06:56.184 00:06:56.184 real 0m1.966s 00:06:56.184 user 0m2.154s 00:06:56.184 sys 0m0.509s 00:06:56.184 13:41:06 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.184 13:41:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:56.184 ************************************ 00:06:56.184 END TEST dpdk_mem_utility 00:06:56.185 ************************************ 00:06:56.185 13:41:06 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:56.185 13:41:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.185 13:41:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.185 13:41:06 -- common/autotest_common.sh@10 -- # set +x 
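For reference, the dpdk_mem_utility run that just completed drives spdk_tgt through a single RPC: env_dpdk_get_mem_stats makes the target write its DPDK allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then renders the heap/mempool/memzone summary shown above (adding -m 0 expands heap id 0 into the long per-element listing). A rough manual reproduction of that flow, assuming scripts/rpc.py is used directly in place of the harness's rpc_cmd and waitforlisten helpers:

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/bin/spdk_tgt &                    # target listens on /var/tmp/spdk.sock
  tgt_pid=$!
  sleep 2                                       # crude stand-in for waitforlisten
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # dumps stats to /tmp/spdk_mem_dump.txt
  $SPDK/scripts/dpdk_mem_info.py                # heap / mempool / memzone totals
  $SPDK/scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0
  kill $tgt_pid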
00:06:56.185 ************************************ 00:06:56.185 START TEST event 00:06:56.185 ************************************ 00:06:56.185 13:41:06 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:56.185 * Looking for test storage... 00:06:56.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:56.185 13:41:06 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:56.185 13:41:06 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:56.185 13:41:06 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:56.442 13:41:06 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:56.442 13:41:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.442 13:41:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.442 13:41:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.442 13:41:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.442 13:41:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.442 13:41:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.442 13:41:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.442 13:41:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.442 13:41:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.442 13:41:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.442 13:41:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.442 13:41:06 event -- scripts/common.sh@344 -- # case "$op" in 00:06:56.442 13:41:06 event -- scripts/common.sh@345 -- # : 1 00:06:56.442 13:41:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.442 13:41:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.442 13:41:06 event -- scripts/common.sh@365 -- # decimal 1 00:06:56.442 13:41:06 event -- scripts/common.sh@353 -- # local d=1 00:06:56.442 13:41:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.442 13:41:06 event -- scripts/common.sh@355 -- # echo 1 00:06:56.442 13:41:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.442 13:41:06 event -- scripts/common.sh@366 -- # decimal 2 00:06:56.442 13:41:06 event -- scripts/common.sh@353 -- # local d=2 00:06:56.442 13:41:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.442 13:41:06 event -- scripts/common.sh@355 -- # echo 2 00:06:56.442 13:41:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.442 13:41:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.442 13:41:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.442 13:41:06 event -- scripts/common.sh@368 -- # return 0 00:06:56.442 13:41:06 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.442 13:41:06 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:56.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.442 --rc genhtml_branch_coverage=1 00:06:56.442 --rc genhtml_function_coverage=1 00:06:56.442 --rc genhtml_legend=1 00:06:56.442 --rc geninfo_all_blocks=1 00:06:56.442 --rc geninfo_unexecuted_blocks=1 00:06:56.442 00:06:56.442 ' 00:06:56.442 13:41:06 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:56.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.442 --rc genhtml_branch_coverage=1 00:06:56.442 --rc genhtml_function_coverage=1 00:06:56.442 --rc genhtml_legend=1 00:06:56.442 --rc 
geninfo_all_blocks=1 00:06:56.442 --rc geninfo_unexecuted_blocks=1 00:06:56.442 00:06:56.442 ' 00:06:56.442 13:41:06 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:56.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.442 --rc genhtml_branch_coverage=1 00:06:56.442 --rc genhtml_function_coverage=1 00:06:56.442 --rc genhtml_legend=1 00:06:56.442 --rc geninfo_all_blocks=1 00:06:56.442 --rc geninfo_unexecuted_blocks=1 00:06:56.442 00:06:56.442 ' 00:06:56.442 13:41:06 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:56.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.442 --rc genhtml_branch_coverage=1 00:06:56.442 --rc genhtml_function_coverage=1 00:06:56.442 --rc genhtml_legend=1 00:06:56.442 --rc geninfo_all_blocks=1 00:06:56.442 --rc geninfo_unexecuted_blocks=1 00:06:56.442 00:06:56.442 ' 00:06:56.442 13:41:06 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:56.442 13:41:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:56.442 13:41:06 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:56.442 13:41:06 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:56.442 13:41:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.442 13:41:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.442 ************************************ 00:06:56.442 START TEST event_perf 00:06:56.442 ************************************ 00:06:56.442 13:41:06 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:56.442 Running I/O for 1 seconds...[2024-10-01 13:41:06.459129] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:56.442 [2024-10-01 13:41:06.459242] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58168 ] 00:06:56.442 [2024-10-01 13:41:06.596897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.699 [2024-10-01 13:41:06.728585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.699 [2024-10-01 13:41:06.728818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.699 [2024-10-01 13:41:06.729153] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.699 Running I/O for 1 seconds...[2024-10-01 13:41:06.728987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.074 00:06:58.074 lcore 0: 106728 00:06:58.074 lcore 1: 106728 00:06:58.074 lcore 2: 106728 00:06:58.074 lcore 3: 106727 00:06:58.074 done. 
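event_perf above is a throughput smoke test: run with -m 0xF -t 1 it starts one reactor per core in the mask, pumps events for one second, and prints a per-lcore counter of events processed ('lcore N: count'). A small wrapper that repeats the same invocation and totals the counters, assuming the output format shown above:

  EVENT_PERF=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
  # same arguments the harness used: 4-core mask, 1 second run
  $EVENT_PERF -m 0xF -t 1 | awk '/^lcore/ { total += $3 } END { print "total events:", total }'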
00:06:58.074 00:06:58.074 real 0m1.431s 00:06:58.074 user 0m4.229s 00:06:58.074 sys 0m0.073s 00:06:58.074 13:41:07 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.074 13:41:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:58.074 ************************************ 00:06:58.074 END TEST event_perf 00:06:58.074 ************************************ 00:06:58.074 13:41:07 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:58.074 13:41:07 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:58.074 13:41:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.074 13:41:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.074 ************************************ 00:06:58.074 START TEST event_reactor 00:06:58.074 ************************************ 00:06:58.074 13:41:07 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:58.074 [2024-10-01 13:41:07.939743] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:58.074 [2024-10-01 13:41:07.939858] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58201 ] 00:06:58.074 [2024-10-01 13:41:08.075659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.074 [2024-10-01 13:41:08.243715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.445 test_start 00:06:59.446 oneshot 00:06:59.446 tick 100 00:06:59.446 tick 100 00:06:59.446 tick 250 00:06:59.446 tick 100 00:06:59.446 tick 100 00:06:59.446 tick 100 00:06:59.446 tick 250 00:06:59.446 tick 500 00:06:59.446 tick 100 00:06:59.446 tick 100 00:06:59.446 tick 250 00:06:59.446 tick 100 00:06:59.446 tick 100 00:06:59.446 test_end 00:06:59.446 00:06:59.446 real 0m1.454s 00:06:59.446 user 0m1.275s 00:06:59.446 sys 0m0.071s 00:06:59.446 13:41:09 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.446 ************************************ 00:06:59.446 END TEST event_reactor 00:06:59.446 13:41:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:59.446 ************************************ 00:06:59.446 13:41:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:59.446 13:41:09 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:59.446 13:41:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.446 13:41:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.446 ************************************ 00:06:59.446 START TEST event_reactor_perf 00:06:59.446 ************************************ 00:06:59.446 13:41:09 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:59.446 [2024-10-01 13:41:09.452820] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
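The reactor test traced above runs a single reactor (core mask 0x1) for one second and logs its poller activity ('oneshot' once, then the repeated 'tick 100/250/500' markers) between test_start and test_end. reactor_perf, whose startup begins just above, instead reports a single 'Performance: N events per second' figure for one core. Both were invoked here with just a run time:

  TESTDIR=/home/vagrant/spdk_repo/spdk/test/event
  $TESTDIR/reactor/reactor -t 1             # poller trace: oneshot / tick ... / test_end
  $TESTDIR/reactor_perf/reactor_perf -t 1   # prints "Performance: N events per second"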
00:06:59.446 [2024-10-01 13:41:09.452937] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58242 ] 00:06:59.446 [2024-10-01 13:41:09.589029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.703 [2024-10-01 13:41:09.743692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.655 test_start 00:07:00.655 test_end 00:07:00.655 Performance: 380282 events per second 00:07:00.655 00:07:00.655 real 0m1.402s 00:07:00.655 user 0m1.218s 00:07:00.655 sys 0m0.074s 00:07:00.656 13:41:10 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.656 ************************************ 00:07:00.656 END TEST event_reactor_perf 00:07:00.656 13:41:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.914 ************************************ 00:07:00.914 13:41:10 event -- event/event.sh@49 -- # uname -s 00:07:00.914 13:41:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:00.914 13:41:10 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:00.914 13:41:10 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.914 13:41:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.914 13:41:10 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.914 ************************************ 00:07:00.914 START TEST event_scheduler 00:07:00.914 ************************************ 00:07:00.914 13:41:10 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:00.914 * Looking for test storage... 
00:07:00.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:00.914 13:41:10 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:00.914 13:41:10 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:07:00.914 13:41:10 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:00.914 13:41:11 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.914 13:41:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:00.914 13:41:11 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.914 13:41:11 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:00.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.914 --rc genhtml_branch_coverage=1 00:07:00.914 --rc genhtml_function_coverage=1 00:07:00.914 --rc genhtml_legend=1 00:07:00.914 --rc geninfo_all_blocks=1 00:07:00.914 --rc geninfo_unexecuted_blocks=1 00:07:00.914 00:07:00.914 ' 00:07:00.914 13:41:11 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:00.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.914 --rc genhtml_branch_coverage=1 00:07:00.914 --rc genhtml_function_coverage=1 00:07:00.914 --rc genhtml_legend=1 00:07:00.914 --rc geninfo_all_blocks=1 00:07:00.914 --rc geninfo_unexecuted_blocks=1 00:07:00.914 00:07:00.914 ' 00:07:00.914 13:41:11 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:00.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.914 --rc genhtml_branch_coverage=1 00:07:00.914 --rc genhtml_function_coverage=1 00:07:00.914 --rc genhtml_legend=1 00:07:00.914 --rc geninfo_all_blocks=1 00:07:00.914 --rc geninfo_unexecuted_blocks=1 00:07:00.914 00:07:00.914 ' 00:07:00.914 13:41:11 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:00.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.914 --rc genhtml_branch_coverage=1 00:07:00.914 --rc genhtml_function_coverage=1 00:07:00.914 --rc genhtml_legend=1 00:07:00.914 --rc geninfo_all_blocks=1 00:07:00.914 --rc geninfo_unexecuted_blocks=1 00:07:00.914 00:07:00.914 ' 00:07:00.914 13:41:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:00.914 13:41:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58306 00:07:00.914 13:41:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:00.914 13:41:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:00.914 13:41:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58306 00:07:00.914 13:41:11 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58306 ']' 00:07:00.914 13:41:11 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.914 13:41:11 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.914 13:41:11 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.914 13:41:11 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.914 13:41:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:01.172 [2024-10-01 13:41:11.136806] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:01.172 [2024-10-01 13:41:11.137174] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58306 ] 00:07:01.172 [2024-10-01 13:41:11.272742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.429 [2024-10-01 13:41:11.393097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.429 [2024-10-01 13:41:11.393231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.429 [2024-10-01 13:41:11.393362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.429 [2024-10-01 13:41:11.393362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.360 13:41:12 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.360 13:41:12 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:02.360 13:41:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:02.360 13:41:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.360 13:41:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:02.360 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:02.360 POWER: Cannot set governor of lcore 0 to userspace 00:07:02.360 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:02.360 POWER: Cannot set governor of lcore 0 to performance 00:07:02.360 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:02.360 POWER: Cannot set governor of lcore 0 to userspace 00:07:02.360 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:02.360 POWER: Cannot set governor of lcore 0 to userspace 00:07:02.360 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:02.360 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:02.360 POWER: Unable to set Power Management Environment for lcore 0 00:07:02.360 [2024-10-01 13:41:12.204108] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:02.360 [2024-10-01 13:41:12.204123] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:02.360 [2024-10-01 13:41:12.204132] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:02.360 [2024-10-01 13:41:12.204144] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:02.360 [2024-10-01 13:41:12.204151] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:02.360 [2024-10-01 13:41:12.204159] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:02.360 13:41:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.360 13:41:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:02.360 13:41:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.360 13:41:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:02.360 [2024-10-01 13:41:12.265395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.360 [2024-10-01 13:41:12.301217] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:02.360 13:41:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.360 13:41:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:02.360 13:41:12 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.360 13:41:12 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.360 13:41:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:02.360 ************************************ 00:07:02.360 START TEST scheduler_create_thread 00:07:02.360 ************************************ 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.360 2 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.360 3 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.360 4 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.360 5 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.360 6 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.360 7 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.360 8 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.360 9 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.360 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.923 10 00:07:02.923 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.923 13:41:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:02.923 13:41:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.923 13:41:12 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.293 13:41:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.293 13:41:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:04.293 13:41:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:04.293 13:41:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.293 13:41:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.859 13:41:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.859 13:41:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:04.859 13:41:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.859 13:41:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.793 13:41:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.793 13:41:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:05.793 13:41:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:05.793 13:41:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.793 13:41:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.361 ************************************ 00:07:06.361 END TEST scheduler_create_thread 00:07:06.361 ************************************ 00:07:06.361 13:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.361 00:07:06.361 real 0m4.212s 00:07:06.361 user 0m0.021s 00:07:06.361 sys 0m0.003s 00:07:06.361 13:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.361 13:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.627 13:41:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:06.627 13:41:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58306 00:07:06.627 13:41:16 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58306 ']' 00:07:06.627 13:41:16 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58306 00:07:06.627 13:41:16 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:06.627 13:41:16 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.627 13:41:16 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58306 00:07:06.627 killing process with pid 58306 00:07:06.627 13:41:16 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:06.627 13:41:16 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:06.627 13:41:16 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
58306' 00:07:06.627 13:41:16 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58306 00:07:06.627 13:41:16 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58306 00:07:06.627 [2024-10-01 13:41:16.802867] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:07.193 ************************************ 00:07:07.193 END TEST event_scheduler 00:07:07.193 ************************************ 00:07:07.193 00:07:07.193 real 0m6.210s 00:07:07.193 user 0m13.899s 00:07:07.193 sys 0m0.409s 00:07:07.193 13:41:17 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.193 13:41:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:07.193 13:41:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:07.193 13:41:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:07.193 13:41:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.193 13:41:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.193 13:41:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.193 ************************************ 00:07:07.193 START TEST app_repeat 00:07:07.193 ************************************ 00:07:07.193 13:41:17 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:07.193 13:41:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.193 13:41:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.193 13:41:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:07.193 13:41:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.193 13:41:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:07.193 13:41:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:07.193 13:41:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:07.193 Process app_repeat pid: 58422 00:07:07.193 spdk_app_start Round 0 00:07:07.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:07.193 13:41:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58422 00:07:07.193 13:41:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:07.193 13:41:17 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:07.193 13:41:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58422' 00:07:07.193 13:41:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:07.193 13:41:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:07.193 13:41:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58422 /var/tmp/spdk-nbd.sock 00:07:07.193 13:41:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58422 ']' 00:07:07.193 13:41:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.193 13:41:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.193 13:41:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:07.193 13:41:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.193 13:41:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:07.193 [2024-10-01 13:41:17.180883] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:07.193 [2024-10-01 13:41:17.181000] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58422 ] 00:07:07.193 [2024-10-01 13:41:17.315376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.451 [2024-10-01 13:41:17.440206] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.451 [2024-10-01 13:41:17.440217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.451 [2024-10-01 13:41:17.495289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.385 13:41:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.385 13:41:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:08.385 13:41:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:08.385 Malloc0 00:07:08.643 13:41:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:08.903 Malloc1 00:07:08.903 13:41:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.903 13:41:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:09.161 /dev/nbd0 00:07:09.421 13:41:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:09.421 13:41:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:09.421 13:41:19 event.app_repeat -- common/autotest_common.sh@868 -- # local 
nbd_name=nbd0 00:07:09.421 13:41:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:09.421 13:41:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.421 13:41:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.421 13:41:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:09.421 13:41:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:09.421 13:41:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.421 13:41:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.421 13:41:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:09.421 1+0 records in 00:07:09.421 1+0 records out 00:07:09.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506731 s, 8.1 MB/s 00:07:09.421 13:41:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.421 13:41:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:09.421 13:41:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.421 13:41:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.421 13:41:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:09.421 13:41:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.421 13:41:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.421 13:41:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:09.680 /dev/nbd1 00:07:09.680 13:41:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:09.680 13:41:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:09.680 13:41:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:09.680 13:41:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:09.680 13:41:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.680 13:41:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.680 13:41:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:09.680 13:41:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:09.680 13:41:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.680 13:41:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.680 13:41:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:09.680 1+0 records in 00:07:09.680 1+0 records out 00:07:09.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279546 s, 14.7 MB/s 00:07:09.680 13:41:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.680 13:41:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:09.680 13:41:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.680 13:41:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.680 13:41:19 event.app_repeat -- 
common/autotest_common.sh@889 -- # return 0 00:07:09.680 13:41:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.680 13:41:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.680 13:41:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.680 13:41:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.680 13:41:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:09.938 { 00:07:09.938 "nbd_device": "/dev/nbd0", 00:07:09.938 "bdev_name": "Malloc0" 00:07:09.938 }, 00:07:09.938 { 00:07:09.938 "nbd_device": "/dev/nbd1", 00:07:09.938 "bdev_name": "Malloc1" 00:07:09.938 } 00:07:09.938 ]' 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:09.938 { 00:07:09.938 "nbd_device": "/dev/nbd0", 00:07:09.938 "bdev_name": "Malloc0" 00:07:09.938 }, 00:07:09.938 { 00:07:09.938 "nbd_device": "/dev/nbd1", 00:07:09.938 "bdev_name": "Malloc1" 00:07:09.938 } 00:07:09.938 ]' 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:09.938 /dev/nbd1' 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:09.938 /dev/nbd1' 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:09.938 256+0 records in 00:07:09.938 256+0 records out 00:07:09.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00634303 s, 165 MB/s 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.938 13:41:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:10.196 256+0 records in 00:07:10.196 256+0 records out 00:07:10.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250551 s, 41.9 MB/s 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:10.196 256+0 records in 00:07:10.196 
256+0 records out 00:07:10.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313504 s, 33.4 MB/s 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.196 13:41:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.455 13:41:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.455 13:41:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.455 13:41:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.455 13:41:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.455 13:41:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.455 13:41:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.455 13:41:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:10.455 13:41:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.455 13:41:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.455 13:41:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.713 13:41:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.713 13:41:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.713 13:41:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.713 13:41:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.713 13:41:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:07:10.713 13:41:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.713 13:41:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:10.713 13:41:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.713 13:41:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.713 13:41:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.713 13:41:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.972 13:41:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:10.972 13:41:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:10.972 13:41:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.972 13:41:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:10.972 13:41:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:10.972 13:41:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.972 13:41:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:10.972 13:41:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:10.972 13:41:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:10.972 13:41:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:10.972 13:41:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:10.972 13:41:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:10.972 13:41:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:11.537 13:41:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:11.537 [2024-10-01 13:41:21.611909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.794 [2024-10-01 13:41:21.719957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.794 [2024-10-01 13:41:21.719963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.794 [2024-10-01 13:41:21.774418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.794 [2024-10-01 13:41:21.774511] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:11.795 [2024-10-01 13:41:21.774527] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:14.325 spdk_app_start Round 1 00:07:14.325 13:41:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:14.325 13:41:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:14.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:14.325 13:41:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58422 /var/tmp/spdk-nbd.sock 00:07:14.325 13:41:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58422 ']' 00:07:14.325 13:41:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:14.325 13:41:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.325 13:41:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:14.325 13:41:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.325 13:41:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:14.892 13:41:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.892 13:41:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:14.892 13:41:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:14.892 Malloc0 00:07:14.892 13:41:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.457 Malloc1 00:07:15.457 13:41:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.457 13:41:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:15.715 /dev/nbd0 00:07:15.715 13:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:15.715 13:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:15.715 13:41:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:15.715 13:41:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:15.715 13:41:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.715 13:41:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.715 13:41:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:15.715 13:41:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:15.715 13:41:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.715 13:41:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.715 13:41:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:15.715 1+0 records in 00:07:15.715 1+0 records out 
00:07:15.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482991 s, 8.5 MB/s 00:07:15.715 13:41:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:15.715 13:41:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:15.715 13:41:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:15.715 13:41:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.715 13:41:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:15.715 13:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.715 13:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.715 13:41:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:15.973 /dev/nbd1 00:07:15.974 13:41:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:15.974 13:41:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:15.974 13:41:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:15.974 13:41:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:15.974 13:41:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.974 13:41:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.974 13:41:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:15.974 13:41:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:15.974 13:41:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.974 13:41:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.974 13:41:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:15.974 1+0 records in 00:07:15.974 1+0 records out 00:07:15.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311254 s, 13.2 MB/s 00:07:15.974 13:41:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:15.974 13:41:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:15.974 13:41:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:15.974 13:41:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.974 13:41:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:15.974 13:41:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.974 13:41:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.974 13:41:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.974 13:41:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.974 13:41:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:16.232 { 00:07:16.232 "nbd_device": "/dev/nbd0", 00:07:16.232 "bdev_name": "Malloc0" 00:07:16.232 }, 00:07:16.232 { 00:07:16.232 "nbd_device": "/dev/nbd1", 00:07:16.232 "bdev_name": "Malloc1" 00:07:16.232 } 
00:07:16.232 ]' 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:16.232 { 00:07:16.232 "nbd_device": "/dev/nbd0", 00:07:16.232 "bdev_name": "Malloc0" 00:07:16.232 }, 00:07:16.232 { 00:07:16.232 "nbd_device": "/dev/nbd1", 00:07:16.232 "bdev_name": "Malloc1" 00:07:16.232 } 00:07:16.232 ]' 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:16.232 /dev/nbd1' 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:16.232 /dev/nbd1' 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:16.232 256+0 records in 00:07:16.232 256+0 records out 00:07:16.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00728967 s, 144 MB/s 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.232 13:41:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:16.495 256+0 records in 00:07:16.495 256+0 records out 00:07:16.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256712 s, 40.8 MB/s 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:16.495 256+0 records in 00:07:16.495 256+0 records out 00:07:16.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313031 s, 33.5 MB/s 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:16.495 13:41:26 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.495 13:41:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.753 13:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.753 13:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.753 13:41:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.753 13:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.753 13:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.753 13:41:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.753 13:41:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.753 13:41:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.753 13:41:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.753 13:41:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:17.011 13:41:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:17.011 13:41:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:17.011 13:41:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:17.011 13:41:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.011 13:41:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.011 13:41:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:17.011 13:41:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:17.011 13:41:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.011 13:41:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.011 13:41:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.011 13:41:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.269 13:41:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.269 13:41:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.269 13:41:27 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:17.269 13:41:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.269 13:41:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.269 13:41:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.269 13:41:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:17.269 13:41:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.269 13:41:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.269 13:41:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:17.269 13:41:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:17.269 13:41:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:17.269 13:41:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:17.528 13:41:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:18.093 [2024-10-01 13:41:28.016060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.093 [2024-10-01 13:41:28.216798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.093 [2024-10-01 13:41:28.216842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.351 [2024-10-01 13:41:28.302397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.351 [2024-10-01 13:41:28.302559] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:18.351 [2024-10-01 13:41:28.302577] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:20.891 spdk_app_start Round 2 00:07:20.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:20.891 13:41:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:20.891 13:41:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:20.891 13:41:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58422 /var/tmp/spdk-nbd.sock 00:07:20.891 13:41:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58422 ']' 00:07:20.891 13:41:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:20.891 13:41:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.891 13:41:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:20.891 13:41:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.891 13:41:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:20.891 13:41:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.891 13:41:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:20.891 13:41:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:21.149 Malloc0 00:07:21.149 13:41:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:21.714 Malloc1 00:07:21.714 13:41:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:21.714 13:41:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.714 13:41:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:21.714 13:41:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:21.714 13:41:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.714 13:41:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:21.714 13:41:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:21.715 13:41:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.715 13:41:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:21.715 13:41:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:21.715 13:41:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.715 13:41:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:21.715 13:41:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:21.715 13:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:21.715 13:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.715 13:41:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:21.972 /dev/nbd0 00:07:21.972 13:41:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:21.972 13:41:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:21.972 13:41:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:21.972 13:41:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:21.972 13:41:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.972 13:41:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.972 13:41:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:21.972 13:41:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:21.972 13:41:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.972 13:41:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.972 13:41:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:21.972 1+0 records in 00:07:21.972 1+0 records out 
00:07:21.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349828 s, 11.7 MB/s 00:07:21.972 13:41:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:21.972 13:41:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:21.972 13:41:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:21.972 13:41:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.972 13:41:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:21.972 13:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.972 13:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.972 13:41:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:22.230 /dev/nbd1 00:07:22.230 13:41:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:22.230 13:41:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:22.230 13:41:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:22.230 13:41:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:22.230 13:41:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:22.230 13:41:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:22.230 13:41:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:22.230 13:41:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:22.230 13:41:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:22.230 13:41:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:22.230 13:41:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:22.230 1+0 records in 00:07:22.230 1+0 records out 00:07:22.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363585 s, 11.3 MB/s 00:07:22.230 13:41:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.230 13:41:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:22.230 13:41:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.230 13:41:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:22.230 13:41:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:22.230 13:41:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.230 13:41:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.230 13:41:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.230 13:41:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.230 13:41:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:22.489 { 00:07:22.489 "nbd_device": "/dev/nbd0", 00:07:22.489 "bdev_name": "Malloc0" 00:07:22.489 }, 00:07:22.489 { 00:07:22.489 "nbd_device": "/dev/nbd1", 00:07:22.489 "bdev_name": "Malloc1" 00:07:22.489 } 
00:07:22.489 ]' 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:22.489 { 00:07:22.489 "nbd_device": "/dev/nbd0", 00:07:22.489 "bdev_name": "Malloc0" 00:07:22.489 }, 00:07:22.489 { 00:07:22.489 "nbd_device": "/dev/nbd1", 00:07:22.489 "bdev_name": "Malloc1" 00:07:22.489 } 00:07:22.489 ]' 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:22.489 /dev/nbd1' 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:22.489 /dev/nbd1' 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:22.489 256+0 records in 00:07:22.489 256+0 records out 00:07:22.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00728209 s, 144 MB/s 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:22.489 256+0 records in 00:07:22.489 256+0 records out 00:07:22.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289614 s, 36.2 MB/s 00:07:22.489 13:41:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.490 13:41:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:22.747 256+0 records in 00:07:22.747 256+0 records out 00:07:22.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0325577 s, 32.2 MB/s 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:22.747 13:41:32 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.747 13:41:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:22.748 13:41:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.748 13:41:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.005 13:41:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.005 13:41:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.005 13:41:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.005 13:41:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.005 13:41:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.005 13:41:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.005 13:41:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:23.005 13:41:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.005 13:41:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.005 13:41:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:23.263 13:41:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:23.263 13:41:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:23.263 13:41:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:23.263 13:41:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.263 13:41:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.263 13:41:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:23.263 13:41:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:23.263 13:41:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.263 13:41:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.263 13:41:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.263 13:41:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.520 13:41:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.520 13:41:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.520 13:41:33 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:23.520 13:41:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.520 13:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.520 13:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.520 13:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:23.520 13:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.521 13:41:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.521 13:41:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:23.521 13:41:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:23.521 13:41:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:23.521 13:41:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:24.086 13:41:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:24.652 [2024-10-01 13:41:34.541099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.652 [2024-10-01 13:41:34.698136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.652 [2024-10-01 13:41:34.698156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.652 [2024-10-01 13:41:34.781539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.652 [2024-10-01 13:41:34.781661] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:24.652 [2024-10-01 13:41:34.781679] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:27.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:27.181 13:41:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58422 /var/tmp/spdk-nbd.sock 00:07:27.181 13:41:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58422 ']' 00:07:27.181 13:41:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:27.181 13:41:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.181 13:41:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
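The nbd data-verify step traced above follows a simple write-then-read-back pattern: fill a 1 MiB temp file from /dev/urandom, dd it onto each exported NBD device with O_DIRECT, then byte-compare the device contents against the file with cmp. A standalone sketch of that pattern (device names and the temp-file path are placeholders, not the exact nbd_common.sh helper):

#!/usr/bin/env bash
# Sketch: write random data to NBD devices and verify it back, as in the trace above.
set -euo pipefail
nbd_list=(/dev/nbd0 /dev/nbd1)                                 # assumed devices
tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write it to the device
done
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$dev"                              # compare the first 1 MiB
done
rm "$tmp_file"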
00:07:27.181 13:41:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.181 13:41:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:27.439 13:41:37 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.439 13:41:37 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:27.439 13:41:37 event.app_repeat -- event/event.sh@39 -- # killprocess 58422 00:07:27.439 13:41:37 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58422 ']' 00:07:27.439 13:41:37 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58422 00:07:27.439 13:41:37 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:27.439 13:41:37 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.439 13:41:37 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58422 00:07:27.439 killing process with pid 58422 00:07:27.439 13:41:37 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.439 13:41:37 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.439 13:41:37 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58422' 00:07:27.439 13:41:37 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58422 00:07:27.439 13:41:37 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58422 00:07:27.698 spdk_app_start is called in Round 0. 00:07:27.698 Shutdown signal received, stop current app iteration 00:07:27.698 Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 reinitialization... 00:07:27.698 spdk_app_start is called in Round 1. 00:07:27.698 Shutdown signal received, stop current app iteration 00:07:27.698 Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 reinitialization... 00:07:27.698 spdk_app_start is called in Round 2. 00:07:27.698 Shutdown signal received, stop current app iteration 00:07:27.698 Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 reinitialization... 00:07:27.698 spdk_app_start is called in Round 3. 00:07:27.698 Shutdown signal received, stop current app iteration 00:07:27.698 13:41:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:27.698 13:41:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:27.698 ************************************ 00:07:27.698 END TEST app_repeat 00:07:27.698 ************************************ 00:07:27.698 00:07:27.698 real 0m20.612s 00:07:27.698 user 0m46.432s 00:07:27.698 sys 0m3.271s 00:07:27.698 13:41:37 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.698 13:41:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:27.698 13:41:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:27.698 13:41:37 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:27.698 13:41:37 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.698 13:41:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.698 13:41:37 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.698 ************************************ 00:07:27.698 START TEST cpu_locks 00:07:27.698 ************************************ 00:07:27.698 13:41:37 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:27.956 * Looking for test storage... 
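Teardown above goes through the usual killprocess sequence: confirm the PID is still alive, look up its command name (reactor_0 for an SPDK app), send SIGTERM and wait for it to exit. A simplified sketch of that pattern, not the exact autotest_common.sh implementation:

# Sketch: kill-and-wait teardown pattern seen in the trace above.
killproc() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0            # already gone, nothing to do
  local name
  name=$(ps --no-headers -o comm= "$pid")           # e.g. reactor_0 for spdk_tgt
  echo "killing process with pid $pid ($name)"
  kill "$pid"                                       # SIGTERM
  wait "$pid" 2>/dev/null || true                   # reap it if it is our child
}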
00:07:27.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:27.956 13:41:37 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:27.956 13:41:37 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:27.956 13:41:37 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:27.956 13:41:37 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.956 13:41:37 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:27.956 13:41:37 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.956 13:41:37 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:27.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.956 --rc genhtml_branch_coverage=1 00:07:27.956 --rc genhtml_function_coverage=1 00:07:27.956 --rc genhtml_legend=1 00:07:27.956 --rc geninfo_all_blocks=1 00:07:27.956 --rc geninfo_unexecuted_blocks=1 00:07:27.956 00:07:27.956 ' 00:07:27.956 13:41:37 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:27.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.956 --rc genhtml_branch_coverage=1 00:07:27.956 --rc genhtml_function_coverage=1 
00:07:27.956 --rc genhtml_legend=1 00:07:27.956 --rc geninfo_all_blocks=1 00:07:27.956 --rc geninfo_unexecuted_blocks=1 00:07:27.956 00:07:27.956 ' 00:07:27.957 13:41:37 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:27.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.957 --rc genhtml_branch_coverage=1 00:07:27.957 --rc genhtml_function_coverage=1 00:07:27.957 --rc genhtml_legend=1 00:07:27.957 --rc geninfo_all_blocks=1 00:07:27.957 --rc geninfo_unexecuted_blocks=1 00:07:27.957 00:07:27.957 ' 00:07:27.957 13:41:37 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:27.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.957 --rc genhtml_branch_coverage=1 00:07:27.957 --rc genhtml_function_coverage=1 00:07:27.957 --rc genhtml_legend=1 00:07:27.957 --rc geninfo_all_blocks=1 00:07:27.957 --rc geninfo_unexecuted_blocks=1 00:07:27.957 00:07:27.957 ' 00:07:27.957 13:41:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:27.957 13:41:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:27.957 13:41:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:27.957 13:41:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:27.957 13:41:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.957 13:41:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.957 13:41:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.957 ************************************ 00:07:27.957 START TEST default_locks 00:07:27.957 ************************************ 00:07:27.957 13:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:27.957 13:41:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58885 00:07:27.957 13:41:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58885 00:07:27.957 13:41:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.957 13:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58885 ']' 00:07:27.957 13:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.957 13:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.957 13:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.957 13:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.957 13:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.957 [2024-10-01 13:41:38.060500] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
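The lcov probe at the start of cpu_locks above runs a small field-by-field version comparison (lt 1.15 2, splitting on the characters .-:). A rough standalone equivalent of that logic, simplified from the scripts/common.sh helpers:

# Sketch: numeric, field-by-field version comparison like the lt/cmp_versions trace above.
version_lt() {                    # returns 0 (true) if $1 < $2
  local IFS=.-:
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  local i x y
  for ((i = 0; i < n; i++)); do
    x=${a[i]:-0}; y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                        # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov is older than 2"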
00:07:27.957 [2024-10-01 13:41:38.061486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58885 ] 00:07:28.215 [2024-10-01 13:41:38.201143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.215 [2024-10-01 13:41:38.327782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.473 [2024-10-01 13:41:38.401013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.473 13:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.473 13:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:28.473 13:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58885 00:07:28.473 13:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.473 13:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58885 00:07:29.041 13:41:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58885 00:07:29.042 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58885 ']' 00:07:29.042 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58885 00:07:29.042 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:29.042 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.042 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58885 00:07:29.042 killing process with pid 58885 00:07:29.042 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.042 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.042 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58885' 00:07:29.042 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58885 00:07:29.042 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58885 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58885 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58885 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58885 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58885 ']' 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.611 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.611 ERROR: process (pid: 58885) is no longer running 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.611 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58885) - No such process 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:29.611 00:07:29.611 real 0m1.645s 00:07:29.611 user 0m1.645s 00:07:29.611 sys 0m0.665s 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.611 13:41:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.611 ************************************ 00:07:29.611 END TEST default_locks 00:07:29.611 ************************************ 00:07:29.611 13:41:39 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:29.611 13:41:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.611 13:41:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.611 13:41:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.611 ************************************ 00:07:29.611 START TEST default_locks_via_rpc 00:07:29.611 ************************************ 00:07:29.611 13:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:29.611 13:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58929 00:07:29.611 13:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58929 00:07:29.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
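The default_locks run above reduces to two assertions: while spdk_tgt is up, lslocks for its PID must show an spdk_cpu_lock entry, and after it exits no /var/tmp/spdk_cpu_lock_* files may remain. A condensed sketch of those two checks, using the same paths as the trace:

# Sketch: the two lock checks exercised by default_locks above.
locks_exist() {
  lslocks -p "$1" | grep -q spdk_cpu_lock        # is a core lock held by this PID?
}
no_locks() {
  local files=(/var/tmp/spdk_cpu_lock_*)
  [[ ! -e ${files[0]} ]]                         # glob matched nothing => no stale lock files
}
# usage (assuming $tgt_pid is the running spdk_tgt):
# locks_exist "$tgt_pid" || echo "expected core lock is missing"
# no_locks || echo "stale lock files left behind"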
00:07:29.611 13:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58929 ']' 00:07:29.611 13:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:29.611 13:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.611 13:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.611 13:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.611 13:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.611 13:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.611 [2024-10-01 13:41:39.743716] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:29.611 [2024-10-01 13:41:39.743842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58929 ] 00:07:29.869 [2024-10-01 13:41:39.875169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.869 [2024-10-01 13:41:39.995120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.126 [2024-10-01 13:41:40.072514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58929 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:30.690 13:41:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # 
lslocks -p 58929 00:07:31.254 13:41:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58929 00:07:31.254 13:41:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58929 ']' 00:07:31.254 13:41:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58929 00:07:31.254 13:41:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:31.254 13:41:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.254 13:41:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58929 00:07:31.254 killing process with pid 58929 00:07:31.254 13:41:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.254 13:41:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.254 13:41:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58929' 00:07:31.254 13:41:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58929 00:07:31.254 13:41:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58929 00:07:31.822 00:07:31.822 real 0m2.080s 00:07:31.822 user 0m2.330s 00:07:31.822 sys 0m0.582s 00:07:31.822 13:41:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.822 ************************************ 00:07:31.822 END TEST default_locks_via_rpc 00:07:31.822 ************************************ 00:07:31.822 13:41:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.822 13:41:41 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:31.822 13:41:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.822 13:41:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.822 13:41:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.822 ************************************ 00:07:31.822 START TEST non_locking_app_on_locked_coremask 00:07:31.822 ************************************ 00:07:31.822 13:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:31.822 13:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:31.822 13:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58980 00:07:31.822 13:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58980 /var/tmp/spdk.sock 00:07:31.822 13:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58980 ']' 00:07:31.822 13:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.822 13:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.822 13:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:31.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.822 13:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.822 13:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.822 [2024-10-01 13:41:41.863612] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:31.822 [2024-10-01 13:41:41.863713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58980 ] 00:07:31.822 [2024-10-01 13:41:41.995907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.080 [2024-10-01 13:41:42.131846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.080 [2024-10-01 13:41:42.206264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.013 13:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.013 13:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:33.013 13:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59002 00:07:33.013 13:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:33.013 13:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59002 /var/tmp/spdk2.sock 00:07:33.013 13:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59002 ']' 00:07:33.013 13:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.013 13:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.013 13:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.013 13:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.013 13:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.013 [2024-10-01 13:41:43.051070] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:33.013 [2024-10-01 13:41:43.051375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59002 ] 00:07:33.270 [2024-10-01 13:41:43.193354] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
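The default_locks_via_rpc run that finished just above toggles the core locks on a live target instead of at startup. The same toggle can be driven directly with rpc.py against the target's RPC socket; a sketch only, with $tgt_pid assumed to hold the target's PID:

# Sketch: releasing and re-taking CPU core locks at runtime via RPC, as traced above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock
$RPC -s "$SOCK" framework_disable_cpumask_locks    # lock files released
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held"
$RPC -s "$SOCK" framework_enable_cpumask_locks     # locks re-acquired
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock || echo "unexpected: lock not re-taken"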
00:07:33.270 [2024-10-01 13:41:43.193415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.270 [2024-10-01 13:41:43.442347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.529 [2024-10-01 13:41:43.601008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.464 13:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.464 13:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:34.464 13:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58980 00:07:34.464 13:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58980 00:07:34.464 13:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:35.396 13:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58980 00:07:35.396 13:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58980 ']' 00:07:35.396 13:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58980 00:07:35.396 13:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:35.396 13:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.396 13:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58980 00:07:35.396 killing process with pid 58980 00:07:35.396 13:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.396 13:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.396 13:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58980' 00:07:35.396 13:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58980 00:07:35.396 13:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58980 00:07:35.961 13:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59002 00:07:35.961 13:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59002 ']' 00:07:35.961 13:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59002 00:07:35.961 13:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:35.961 13:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.961 13:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59002 00:07:35.961 killing process with pid 59002 00:07:35.961 13:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.961 13:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.961 13:41:46 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59002' 00:07:35.961 13:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59002 00:07:35.961 13:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59002 00:07:36.526 00:07:36.526 real 0m4.741s 00:07:36.526 user 0m5.564s 00:07:36.526 sys 0m1.253s 00:07:36.526 13:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.526 13:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.526 ************************************ 00:07:36.526 END TEST non_locking_app_on_locked_coremask 00:07:36.526 ************************************ 00:07:36.526 13:41:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:36.526 13:41:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.526 13:41:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.526 13:41:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.526 ************************************ 00:07:36.526 START TEST locking_app_on_unlocked_coremask 00:07:36.526 ************************************ 00:07:36.526 13:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:36.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.526 13:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59069 00:07:36.526 13:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59069 /var/tmp/spdk.sock 00:07:36.526 13:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:36.526 13:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59069 ']' 00:07:36.526 13:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.526 13:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.526 13:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.526 13:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.526 13:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.526 [2024-10-01 13:41:46.654648] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:36.526 [2024-10-01 13:41:46.655243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59069 ] 00:07:36.783 [2024-10-01 13:41:46.789263] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
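In the non_locking_app_on_locked_coremask run above, the two targets coexist on core 0 only because the second one opts out of core locking and listens on a second RPC socket. The launch pattern, roughly (binary path and sockets as in the trace; the real test also waits for each RPC socket before proceeding):

# Sketch: two targets sharing a core; only the first takes the CPU core lock.
TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
$TGT -m 0x1 &                                                   # holds the core 0 lock
pid1=$!
$TGT -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # no lock, second socket
pid2=$!
# ... run the checks, then tear both down:
kill "$pid1" "$pid2"
wait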
00:07:36.783 [2024-10-01 13:41:46.789593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.783 [2024-10-01 13:41:46.918597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.041 [2024-10-01 13:41:46.995592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.041 13:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.041 13:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:37.041 13:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59083 00:07:37.041 13:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:37.041 13:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59083 /var/tmp/spdk2.sock 00:07:37.041 13:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59083 ']' 00:07:37.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.041 13:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.041 13:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.041 13:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.041 13:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.041 13:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.299 [2024-10-01 13:41:47.257404] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:07:37.299 [2024-10-01 13:41:47.257736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59083 ] 00:07:37.299 [2024-10-01 13:41:47.400213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.557 [2024-10-01 13:41:47.657001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.815 [2024-10-01 13:41:47.809522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.381 13:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.381 13:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:38.381 13:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59083 00:07:38.381 13:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.381 13:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59083 00:07:39.315 13:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59069 00:07:39.315 13:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59069 ']' 00:07:39.315 13:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59069 00:07:39.315 13:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:39.315 13:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.315 13:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59069 00:07:39.315 killing process with pid 59069 00:07:39.315 13:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.315 13:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.315 13:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59069' 00:07:39.315 13:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59069 00:07:39.315 13:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59069 00:07:40.248 13:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59083 00:07:40.248 13:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59083 ']' 00:07:40.248 13:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59083 00:07:40.248 13:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:40.248 13:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.248 13:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59083 00:07:40.248 killing process with pid 59083 00:07:40.248 13:41:50 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.248 13:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.248 13:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59083' 00:07:40.248 13:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59083 00:07:40.248 13:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59083 00:07:40.506 00:07:40.506 real 0m4.027s 00:07:40.506 user 0m4.486s 00:07:40.506 sys 0m1.164s 00:07:40.506 ************************************ 00:07:40.506 END TEST locking_app_on_unlocked_coremask 00:07:40.506 ************************************ 00:07:40.506 13:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.506 13:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.507 13:41:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:40.507 13:41:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.507 13:41:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.507 13:41:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.507 ************************************ 00:07:40.507 START TEST locking_app_on_locked_coremask 00:07:40.507 ************************************ 00:07:40.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.507 13:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:40.507 13:41:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59150 00:07:40.507 13:41:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:40.507 13:41:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59150 /var/tmp/spdk.sock 00:07:40.507 13:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59150 ']' 00:07:40.507 13:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.507 13:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.507 13:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.507 13:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.507 13:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.765 [2024-10-01 13:41:50.720638] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:07:40.765 [2024-10-01 13:41:50.720755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59150 ] 00:07:40.765 [2024-10-01 13:41:50.851683] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.023 [2024-10-01 13:41:50.971823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.023 [2024-10-01 13:41:51.046482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59166 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59166 /var/tmp/spdk2.sock 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59166 /var/tmp/spdk2.sock 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59166 /var/tmp/spdk2.sock 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59166 ']' 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:41.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.616 13:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.889 [2024-10-01 13:41:51.800446] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
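Here the second target (pid 59166) is launched under the NOT wrapper because it is expected to fail: pid 59150 already holds the core 0 lock, and the claim error just below confirms it. A minimal sketch of asserting an expected failure, simplified from the autotest_common.sh helper:

# Sketch: assert that a command fails (the idea behind the NOT wrapper above).
expect_failure() {
  if "$@"; then
    echo "ERROR: '$*' unexpectedly succeeded" >&2
    return 1
  fi
  return 0
}
# e.g. a second target on an already-locked core should refuse to start:
# expect_failure "$TGT" -m 0x1 -r /var/tmp/spdk2.sock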
00:07:41.889 [2024-10-01 13:41:51.800784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59166 ] 00:07:41.889 [2024-10-01 13:41:51.943482] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59150 has claimed it. 00:07:41.889 [2024-10-01 13:41:51.943577] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:42.456 ERROR: process (pid: 59166) is no longer running 00:07:42.456 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59166) - No such process 00:07:42.456 13:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.456 13:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:42.456 13:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:42.456 13:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.456 13:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.456 13:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.456 13:41:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59150 00:07:42.456 13:41:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59150 00:07:42.457 13:41:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:43.036 13:41:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59150 00:07:43.036 13:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59150 ']' 00:07:43.036 13:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59150 00:07:43.036 13:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:43.036 13:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.036 13:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59150 00:07:43.037 killing process with pid 59150 00:07:43.037 13:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.037 13:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.037 13:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59150' 00:07:43.037 13:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59150 00:07:43.037 13:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59150 00:07:43.295 00:07:43.295 real 0m2.799s 00:07:43.295 user 0m3.304s 00:07:43.295 sys 0m0.651s 00:07:43.295 13:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.295 13:41:53 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:43.295 ************************************ 00:07:43.295 END TEST locking_app_on_locked_coremask 00:07:43.295 ************************************ 00:07:43.552 13:41:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:43.552 13:41:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.552 13:41:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.552 13:41:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.552 ************************************ 00:07:43.552 START TEST locking_overlapped_coremask 00:07:43.552 ************************************ 00:07:43.552 13:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:43.552 13:41:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59217 00:07:43.552 13:41:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:43.552 13:41:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59217 /var/tmp/spdk.sock 00:07:43.552 13:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59217 ']' 00:07:43.552 13:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.552 13:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.552 13:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.552 13:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.552 13:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.552 [2024-10-01 13:41:53.603279] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:07:43.552 [2024-10-01 13:41:53.603424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59217 ] 00:07:43.809 [2024-10-01 13:41:53.742185] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:43.809 [2024-10-01 13:41:53.916090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.809 [2024-10-01 13:41:53.916197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.809 [2024-10-01 13:41:53.916220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.066 [2024-10-01 13:41:54.002773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59235 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59235 /var/tmp/spdk2.sock 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59235 /var/tmp/spdk2.sock 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59235 /var/tmp/spdk2.sock 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59235 ']' 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:44.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.632 13:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.632 [2024-10-01 13:41:54.710145] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
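The overlap being tested here is easiest to see in binary: -m 0x7 pins cores 0-2 and -m 0x1c pins cores 2-4, so core 2 is contested and the second claim fails (the error appears just below). A one-liner to spot such overlaps before launching; a convenience sketch, not part of the test:

# Sketch: check two hex cpumasks for overlapping cores.
mask1=0x7     # cores 0,1,2
mask2=0x1c    # cores 2,3,4
overlap=$(( mask1 & mask2 ))
printf 'overlapping core mask: 0x%x\n' "$overlap"    # prints 0x4, i.e. core 2
(( overlap )) && echo "these masks cannot both hold their core locks"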
00:07:44.633 [2024-10-01 13:41:54.710556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59235 ] 00:07:44.890 [2024-10-01 13:41:54.863376] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59217 has claimed it. 00:07:44.890 [2024-10-01 13:41:54.863469] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:45.456 ERROR: process (pid: 59235) is no longer running 00:07:45.456 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59235) - No such process 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59217 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59217 ']' 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59217 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59217 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59217' 00:07:45.456 killing process with pid 59217 00:07:45.456 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59217 00:07:45.456 13:41:55 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59217 00:07:46.023 00:07:46.023 real 0m2.414s 00:07:46.023 user 0m6.608s 00:07:46.023 sys 0m0.492s 00:07:46.023 ************************************ 00:07:46.023 END TEST locking_overlapped_coremask 00:07:46.023 ************************************ 00:07:46.024 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.024 13:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.024 13:41:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:46.024 13:41:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.024 13:41:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.024 13:41:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.024 ************************************ 00:07:46.024 START TEST locking_overlapped_coremask_via_rpc 00:07:46.024 ************************************ 00:07:46.024 13:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:46.024 13:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59275 00:07:46.024 13:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59275 /var/tmp/spdk.sock 00:07:46.024 13:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:46.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.024 13:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59275 ']' 00:07:46.024 13:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.024 13:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.024 13:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.024 13:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.024 13:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.024 [2024-10-01 13:41:56.064563] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:46.024 [2024-10-01 13:41:56.064704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59275 ] 00:07:46.282 [2024-10-01 13:41:56.204116] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
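In the test that just finished, the -m 0x7 target held the three per-core lock files that check_remaining_locks compared against (/var/tmp/spdk_cpu_lock_000 through _002), which is what forced the overlapping 0x1c target to exit. The "CPU core locks deactivated" notice here reflects the --disable-cpumask-locks flag on this new target: nothing is claimed at startup, so the second overlapping target launched below also comes up cleanly, and the conflict is only provoked later over RPC. The claimed locks can be listed with the same glob the test uses:

  ls /var/tmp/spdk_cpu_lock_*   # while a mask-locked -m 0x7 target runs: ..._000 ..._001 ..._002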
00:07:46.282 [2024-10-01 13:41:56.204494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.282 [2024-10-01 13:41:56.332697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.282 [2024-10-01 13:41:56.332780] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.282 [2024-10-01 13:41:56.332771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.282 [2024-10-01 13:41:56.413875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.216 13:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.216 13:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:47.216 13:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59304 00:07:47.216 13:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:47.216 13:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59304 /var/tmp/spdk2.sock 00:07:47.216 13:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59304 ']' 00:07:47.216 13:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.216 13:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.216 13:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:47.216 13:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.216 13:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.216 [2024-10-01 13:41:57.313580] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:47.216 [2024-10-01 13:41:57.314058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59304 ] 00:07:47.474 [2024-10-01 13:41:57.470461] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:47.474 [2024-10-01 13:41:57.470533] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.733 [2024-10-01 13:41:57.719368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.733 [2024-10-01 13:41:57.719553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.733 [2024-10-01 13:41:57.719563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:47.733 [2024-10-01 13:41:57.872167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.300 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.300 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:48.300 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:48.300 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.300 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.300 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.300 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.301 [2024-10-01 13:41:58.399088] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59275 has claimed it. 00:07:48.301 request: 00:07:48.301 { 00:07:48.301 "method": "framework_enable_cpumask_locks", 00:07:48.301 "req_id": 1 00:07:48.301 } 00:07:48.301 Got JSON-RPC error response 00:07:48.301 response: 00:07:48.301 { 00:07:48.301 "code": -32603, 00:07:48.301 "message": "Failed to claim CPU core: 2" 00:07:48.301 } 00:07:48.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
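Replaying the two RPC calls traced above by hand (sockets as in this run) would look roughly like this; the first claimer wins and the second gets the -32603 error shown in the response:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_enable_cpumask_locks                          # first target (-m 0x7, /var/tmp/spdk.sock) claims cores 0-2
  $rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target (-m 0x1c) fails: "Failed to claim CPU core: 2"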
00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59275 /var/tmp/spdk.sock 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59275 ']' 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.301 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.558 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.558 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:48.558 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59304 /var/tmp/spdk2.sock 00:07:48.558 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59304 ']' 00:07:48.559 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:48.559 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.559 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:48.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:48.559 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.559 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.816 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.816 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:48.816 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:48.816 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:49.074 ************************************ 00:07:49.074 END TEST locking_overlapped_coremask_via_rpc 00:07:49.074 ************************************ 00:07:49.074 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:49.074 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:49.074 00:07:49.074 real 0m3.007s 00:07:49.074 user 0m1.717s 00:07:49.074 sys 0m0.210s 00:07:49.074 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.074 13:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.074 13:41:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:49.074 13:41:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59275 ]] 00:07:49.074 13:41:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59275 00:07:49.074 13:41:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59275 ']' 00:07:49.074 13:41:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59275 00:07:49.074 13:41:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:49.074 13:41:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.074 13:41:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59275 00:07:49.074 killing process with pid 59275 00:07:49.074 13:41:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.074 13:41:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.074 13:41:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59275' 00:07:49.075 13:41:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59275 00:07:49.075 13:41:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59275 00:07:49.332 13:41:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59304 ]] 00:07:49.332 13:41:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59304 00:07:49.332 13:41:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59304 ']' 00:07:49.332 13:41:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59304 00:07:49.332 13:41:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:49.332 13:41:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.332 
13:41:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59304 00:07:49.589 killing process with pid 59304 00:07:49.589 13:41:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:49.589 13:41:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:49.589 13:41:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59304' 00:07:49.589 13:41:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59304 00:07:49.589 13:41:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59304 00:07:49.850 13:41:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:49.850 13:41:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:49.850 13:41:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59275 ]] 00:07:49.850 13:41:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59275 00:07:49.850 13:41:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59275 ']' 00:07:49.850 Process with pid 59275 is not found 00:07:49.850 Process with pid 59304 is not found 00:07:49.850 13:41:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59275 00:07:49.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59275) - No such process 00:07:49.850 13:41:59 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59275 is not found' 00:07:49.850 13:41:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59304 ]] 00:07:49.850 13:41:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59304 00:07:49.850 13:41:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59304 ']' 00:07:49.850 13:41:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59304 00:07:49.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59304) - No such process 00:07:49.850 13:41:59 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59304 is not found' 00:07:49.851 13:41:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:49.851 00:07:49.851 real 0m22.170s 00:07:49.851 user 0m39.674s 00:07:49.851 sys 0m5.917s 00:07:49.851 13:41:59 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.851 ************************************ 00:07:49.851 END TEST cpu_locks 00:07:49.851 ************************************ 00:07:49.851 13:41:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.851 ************************************ 00:07:49.851 END TEST event 00:07:49.851 ************************************ 00:07:49.851 00:07:49.851 real 0m53.795s 00:07:49.851 user 1m46.962s 00:07:49.851 sys 0m10.083s 00:07:49.851 13:42:00 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.851 13:42:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:50.110 13:42:00 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:50.110 13:42:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.110 13:42:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.110 13:42:00 -- common/autotest_common.sh@10 -- # set +x 00:07:50.110 ************************************ 00:07:50.110 START TEST thread 00:07:50.110 ************************************ 00:07:50.110 13:42:00 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:50.110 * Looking for test storage... 
00:07:50.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:50.110 13:42:00 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:50.110 13:42:00 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:50.110 13:42:00 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:50.110 13:42:00 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:50.110 13:42:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.110 13:42:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.110 13:42:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.110 13:42:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.110 13:42:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.110 13:42:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.110 13:42:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.111 13:42:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.111 13:42:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.111 13:42:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.111 13:42:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.111 13:42:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:50.111 13:42:00 thread -- scripts/common.sh@345 -- # : 1 00:07:50.111 13:42:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.111 13:42:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.111 13:42:00 thread -- scripts/common.sh@365 -- # decimal 1 00:07:50.111 13:42:00 thread -- scripts/common.sh@353 -- # local d=1 00:07:50.111 13:42:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.111 13:42:00 thread -- scripts/common.sh@355 -- # echo 1 00:07:50.111 13:42:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.111 13:42:00 thread -- scripts/common.sh@366 -- # decimal 2 00:07:50.111 13:42:00 thread -- scripts/common.sh@353 -- # local d=2 00:07:50.111 13:42:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.111 13:42:00 thread -- scripts/common.sh@355 -- # echo 2 00:07:50.111 13:42:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.111 13:42:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.111 13:42:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.111 13:42:00 thread -- scripts/common.sh@368 -- # return 0 00:07:50.111 13:42:00 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.111 13:42:00 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.111 --rc genhtml_branch_coverage=1 00:07:50.111 --rc genhtml_function_coverage=1 00:07:50.111 --rc genhtml_legend=1 00:07:50.111 --rc geninfo_all_blocks=1 00:07:50.111 --rc geninfo_unexecuted_blocks=1 00:07:50.111 00:07:50.111 ' 00:07:50.111 13:42:00 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.111 --rc genhtml_branch_coverage=1 00:07:50.111 --rc genhtml_function_coverage=1 00:07:50.111 --rc genhtml_legend=1 00:07:50.111 --rc geninfo_all_blocks=1 00:07:50.111 --rc geninfo_unexecuted_blocks=1 00:07:50.111 00:07:50.111 ' 00:07:50.111 13:42:00 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:50.111 --rc genhtml_branch_coverage=1 00:07:50.111 --rc genhtml_function_coverage=1 00:07:50.111 --rc genhtml_legend=1 00:07:50.111 --rc geninfo_all_blocks=1 00:07:50.111 --rc geninfo_unexecuted_blocks=1 00:07:50.111 00:07:50.111 ' 00:07:50.111 13:42:00 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.111 --rc genhtml_branch_coverage=1 00:07:50.111 --rc genhtml_function_coverage=1 00:07:50.111 --rc genhtml_legend=1 00:07:50.111 --rc geninfo_all_blocks=1 00:07:50.111 --rc geninfo_unexecuted_blocks=1 00:07:50.111 00:07:50.111 ' 00:07:50.111 13:42:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:50.111 13:42:00 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:50.111 13:42:00 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.111 13:42:00 thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.111 ************************************ 00:07:50.111 START TEST thread_poller_perf 00:07:50.111 ************************************ 00:07:50.111 13:42:00 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:50.374 [2024-10-01 13:42:00.294322] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:50.374 [2024-10-01 13:42:00.294641] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59439 ] 00:07:50.374 [2024-10-01 13:42:00.427197] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.374 [2024-10-01 13:42:00.549022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.374 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:51.745 ====================================== 00:07:51.745 busy:2212610283 (cyc) 00:07:51.745 total_run_count: 318000 00:07:51.745 tsc_hz: 2200000000 (cyc) 00:07:51.745 ====================================== 00:07:51.745 poller_cost: 6957 (cyc), 3162 (nsec) 00:07:51.745 00:07:51.745 real 0m1.372s 00:07:51.745 ************************************ 00:07:51.745 END TEST thread_poller_perf 00:07:51.745 ************************************ 00:07:51.745 user 0m1.207s 00:07:51.745 sys 0m0.056s 00:07:51.745 13:42:01 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.745 13:42:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:51.745 13:42:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:51.745 13:42:01 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:51.745 13:42:01 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.745 13:42:01 thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.745 ************************************ 00:07:51.745 START TEST thread_poller_perf 00:07:51.745 ************************************ 00:07:51.745 13:42:01 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:51.745 [2024-10-01 13:42:01.714323] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:51.746 [2024-10-01 13:42:01.714672] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59470 ] 00:07:51.746 [2024-10-01 13:42:01.847853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.004 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:52.004 [2024-10-01 13:42:01.972527] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.938 ====================================== 00:07:52.938 busy:2202056560 (cyc) 00:07:52.938 total_run_count: 4191000 00:07:52.938 tsc_hz: 2200000000 (cyc) 00:07:52.938 ====================================== 00:07:52.938 poller_cost: 525 (cyc), 238 (nsec) 00:07:52.938 00:07:52.938 real 0m1.370s 00:07:52.938 user 0m1.201s 00:07:52.938 sys 0m0.060s 00:07:52.938 ************************************ 00:07:52.938 END TEST thread_poller_perf 00:07:52.938 ************************************ 00:07:52.938 13:42:03 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.938 13:42:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:52.938 13:42:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:52.938 00:07:52.938 real 0m3.033s 00:07:52.938 user 0m2.554s 00:07:52.938 sys 0m0.266s 00:07:52.938 ************************************ 00:07:52.938 END TEST thread 00:07:52.938 ************************************ 00:07:52.938 13:42:03 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.938 13:42:03 thread -- common/autotest_common.sh@10 -- # set +x 00:07:53.197 13:42:03 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:53.197 13:42:03 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:53.197 13:42:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:53.197 13:42:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.197 13:42:03 -- common/autotest_common.sh@10 -- # set +x 00:07:53.197 ************************************ 00:07:53.197 START TEST app_cmdline 00:07:53.197 ************************************ 00:07:53.197 13:42:03 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:53.197 * Looking for test storage... 00:07:53.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:53.197 13:42:03 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:53.197 13:42:03 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:53.197 13:42:03 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:53.197 13:42:03 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:53.197 13:42:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.197 13:42:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.197 13:42:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.197 13:42:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.197 13:42:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.197 13:42:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.197 13:42:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.197 13:42:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.197 13:42:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.197 13:42:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.198 13:42:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:53.198 13:42:03 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.198 13:42:03 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:53.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.198 --rc genhtml_branch_coverage=1 00:07:53.198 --rc genhtml_function_coverage=1 00:07:53.198 --rc genhtml_legend=1 00:07:53.198 --rc geninfo_all_blocks=1 00:07:53.198 --rc geninfo_unexecuted_blocks=1 00:07:53.198 00:07:53.198 ' 00:07:53.198 13:42:03 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:53.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.198 --rc genhtml_branch_coverage=1 00:07:53.198 --rc genhtml_function_coverage=1 00:07:53.198 --rc genhtml_legend=1 00:07:53.198 --rc geninfo_all_blocks=1 00:07:53.198 --rc geninfo_unexecuted_blocks=1 00:07:53.198 00:07:53.198 ' 00:07:53.198 13:42:03 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:53.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.198 --rc genhtml_branch_coverage=1 00:07:53.198 --rc genhtml_function_coverage=1 00:07:53.198 --rc genhtml_legend=1 00:07:53.198 --rc geninfo_all_blocks=1 00:07:53.198 --rc geninfo_unexecuted_blocks=1 00:07:53.198 00:07:53.198 ' 00:07:53.198 13:42:03 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:53.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.198 --rc genhtml_branch_coverage=1 00:07:53.198 --rc genhtml_function_coverage=1 00:07:53.198 --rc genhtml_legend=1 00:07:53.198 --rc geninfo_all_blocks=1 00:07:53.198 --rc geninfo_unexecuted_blocks=1 00:07:53.198 00:07:53.198 ' 00:07:53.198 13:42:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:53.198 13:42:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59553 00:07:53.198 13:42:03 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:53.198 13:42:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59553 00:07:53.198 13:42:03 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59553 ']' 00:07:53.198 13:42:03 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.198 13:42:03 app_cmdline -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:07:53.198 13:42:03 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.198 13:42:03 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.198 13:42:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:53.456 [2024-10-01 13:42:03.408590] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:53.456 [2024-10-01 13:42:03.409364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59553 ] 00:07:53.456 [2024-10-01 13:42:03.548332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.715 [2024-10-01 13:42:03.688318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.715 [2024-10-01 13:42:03.770601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.281 13:42:04 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.281 13:42:04 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:54.281 13:42:04 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:54.541 { 00:07:54.541 "version": "SPDK v25.01-pre git sha1 3a41ae5b3", 00:07:54.541 "fields": { 00:07:54.541 "major": 25, 00:07:54.541 "minor": 1, 00:07:54.541 "patch": 0, 00:07:54.541 "suffix": "-pre", 00:07:54.541 "commit": "3a41ae5b3" 00:07:54.541 } 00:07:54.541 } 00:07:54.800 13:42:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:54.800 13:42:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:54.800 13:42:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:54.800 13:42:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:54.800 13:42:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:54.800 13:42:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:54.800 13:42:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:54.800 13:42:04 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.800 13:42:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:54.800 13:42:04 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.800 13:42:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:54.800 13:42:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:54.800 13:42:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:54.800 13:42:04 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:54.800 13:42:04 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:54.800 13:42:04 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.800 13:42:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.800 13:42:04 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.800 13:42:04 app_cmdline -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.800 13:42:04 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.800 13:42:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.800 13:42:04 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.800 13:42:04 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:54.800 13:42:04 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.059 request: 00:07:55.059 { 00:07:55.059 "method": "env_dpdk_get_mem_stats", 00:07:55.059 "req_id": 1 00:07:55.059 } 00:07:55.059 Got JSON-RPC error response 00:07:55.059 response: 00:07:55.059 { 00:07:55.059 "code": -32601, 00:07:55.059 "message": "Method not found" 00:07:55.059 } 00:07:55.059 13:42:05 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:55.059 13:42:05 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:55.059 13:42:05 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:55.059 13:42:05 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:55.059 13:42:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59553 00:07:55.059 13:42:05 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59553 ']' 00:07:55.059 13:42:05 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59553 00:07:55.059 13:42:05 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:55.059 13:42:05 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.059 13:42:05 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59553 00:07:55.059 killing process with pid 59553 00:07:55.059 13:42:05 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:55.059 13:42:05 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:55.059 13:42:05 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59553' 00:07:55.059 13:42:05 app_cmdline -- common/autotest_common.sh@969 -- # kill 59553 00:07:55.059 13:42:05 app_cmdline -- common/autotest_common.sh@974 -- # wait 59553 00:07:55.634 ************************************ 00:07:55.634 END TEST app_cmdline 00:07:55.634 ************************************ 00:07:55.634 00:07:55.634 real 0m2.467s 00:07:55.634 user 0m3.141s 00:07:55.634 sys 0m0.528s 00:07:55.634 13:42:05 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.634 13:42:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:55.634 13:42:05 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:55.634 13:42:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.634 13:42:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.634 13:42:05 -- common/autotest_common.sh@10 -- # set +x 00:07:55.634 ************************************ 00:07:55.634 START TEST version 00:07:55.634 ************************************ 00:07:55.634 13:42:05 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:55.634 * Looking for test storage... 
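The app_cmdline run that just ended exercises the --rpcs-allowed whitelist visible in its spdk_tgt command line: only spdk_get_version and rpc_get_methods were permitted, so those calls return normally while env_dpdk_get_mem_stats comes back as JSON-RPC -32601 "Method not found". A minimal reproduction outside the test harness (same binaries and default /var/tmp/spdk.sock socket as in this run) might be:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # once the target is listening:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version         # allowed -> version JSON as above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # not whitelisted -> -32601 "Method not found"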
00:07:55.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:55.634 13:42:05 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:55.634 13:42:05 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:55.634 13:42:05 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:55.898 13:42:05 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:55.898 13:42:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.898 13:42:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.898 13:42:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.898 13:42:05 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.898 13:42:05 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.898 13:42:05 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.898 13:42:05 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.898 13:42:05 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.898 13:42:05 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.898 13:42:05 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.898 13:42:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.898 13:42:05 version -- scripts/common.sh@344 -- # case "$op" in 00:07:55.898 13:42:05 version -- scripts/common.sh@345 -- # : 1 00:07:55.898 13:42:05 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.898 13:42:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:55.898 13:42:05 version -- scripts/common.sh@365 -- # decimal 1 00:07:55.898 13:42:05 version -- scripts/common.sh@353 -- # local d=1 00:07:55.898 13:42:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.898 13:42:05 version -- scripts/common.sh@355 -- # echo 1 00:07:55.898 13:42:05 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.898 13:42:05 version -- scripts/common.sh@366 -- # decimal 2 00:07:55.898 13:42:05 version -- scripts/common.sh@353 -- # local d=2 00:07:55.898 13:42:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.898 13:42:05 version -- scripts/common.sh@355 -- # echo 2 00:07:55.898 13:42:05 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.898 13:42:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.898 13:42:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.898 13:42:05 version -- scripts/common.sh@368 -- # return 0 00:07:55.898 13:42:05 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.898 13:42:05 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:55.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.898 --rc genhtml_branch_coverage=1 00:07:55.898 --rc genhtml_function_coverage=1 00:07:55.898 --rc genhtml_legend=1 00:07:55.898 --rc geninfo_all_blocks=1 00:07:55.898 --rc geninfo_unexecuted_blocks=1 00:07:55.898 00:07:55.898 ' 00:07:55.898 13:42:05 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:55.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.898 --rc genhtml_branch_coverage=1 00:07:55.898 --rc genhtml_function_coverage=1 00:07:55.898 --rc genhtml_legend=1 00:07:55.898 --rc geninfo_all_blocks=1 00:07:55.898 --rc geninfo_unexecuted_blocks=1 00:07:55.898 00:07:55.898 ' 00:07:55.898 13:42:05 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:55.898 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:55.898 --rc genhtml_branch_coverage=1 00:07:55.898 --rc genhtml_function_coverage=1 00:07:55.898 --rc genhtml_legend=1 00:07:55.898 --rc geninfo_all_blocks=1 00:07:55.898 --rc geninfo_unexecuted_blocks=1 00:07:55.898 00:07:55.898 ' 00:07:55.898 13:42:05 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:55.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.898 --rc genhtml_branch_coverage=1 00:07:55.898 --rc genhtml_function_coverage=1 00:07:55.898 --rc genhtml_legend=1 00:07:55.898 --rc geninfo_all_blocks=1 00:07:55.898 --rc geninfo_unexecuted_blocks=1 00:07:55.898 00:07:55.898 ' 00:07:55.898 13:42:05 version -- app/version.sh@17 -- # get_header_version major 00:07:55.898 13:42:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.898 13:42:05 version -- app/version.sh@14 -- # cut -f2 00:07:55.898 13:42:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.898 13:42:05 version -- app/version.sh@17 -- # major=25 00:07:55.898 13:42:05 version -- app/version.sh@18 -- # get_header_version minor 00:07:55.898 13:42:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.898 13:42:05 version -- app/version.sh@14 -- # cut -f2 00:07:55.898 13:42:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.898 13:42:05 version -- app/version.sh@18 -- # minor=1 00:07:55.898 13:42:05 version -- app/version.sh@19 -- # get_header_version patch 00:07:55.898 13:42:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.898 13:42:05 version -- app/version.sh@14 -- # cut -f2 00:07:55.899 13:42:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.899 13:42:05 version -- app/version.sh@19 -- # patch=0 00:07:55.899 13:42:05 version -- app/version.sh@20 -- # get_header_version suffix 00:07:55.899 13:42:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.899 13:42:05 version -- app/version.sh@14 -- # cut -f2 00:07:55.899 13:42:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.899 13:42:05 version -- app/version.sh@20 -- # suffix=-pre 00:07:55.899 13:42:05 version -- app/version.sh@22 -- # version=25.1 00:07:55.899 13:42:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:55.899 13:42:05 version -- app/version.sh@28 -- # version=25.1rc0 00:07:55.899 13:42:05 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:55.899 13:42:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:55.899 13:42:05 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:55.899 13:42:05 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:55.899 00:07:55.899 real 0m0.266s 00:07:55.899 user 0m0.185s 00:07:55.899 sys 0m0.113s 00:07:55.899 ************************************ 00:07:55.899 END TEST version 00:07:55.899 ************************************ 00:07:55.899 13:42:05 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.899 13:42:05 version -- common/autotest_common.sh@10 -- # set +x 00:07:55.899 13:42:05 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:55.899 13:42:05 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:55.899 13:42:05 -- spdk/autotest.sh@194 -- # uname -s 00:07:55.899 13:42:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:55.899 13:42:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:55.899 13:42:05 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:55.899 13:42:05 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:55.899 13:42:05 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:55.899 13:42:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.899 13:42:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.899 13:42:05 -- common/autotest_common.sh@10 -- # set +x 00:07:55.899 ************************************ 00:07:55.899 START TEST spdk_dd 00:07:55.899 ************************************ 00:07:55.899 13:42:05 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:55.899 * Looking for test storage... 00:07:55.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:55.899 13:42:06 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:55.899 13:42:06 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:07:55.899 13:42:06 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:56.158 13:42:06 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:56.158 13:42:06 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.158 13:42:06 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:56.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.158 --rc genhtml_branch_coverage=1 00:07:56.158 --rc genhtml_function_coverage=1 00:07:56.158 --rc genhtml_legend=1 00:07:56.158 --rc geninfo_all_blocks=1 00:07:56.158 --rc geninfo_unexecuted_blocks=1 00:07:56.158 00:07:56.158 ' 00:07:56.158 13:42:06 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:56.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.158 --rc genhtml_branch_coverage=1 00:07:56.158 --rc genhtml_function_coverage=1 00:07:56.158 --rc genhtml_legend=1 00:07:56.158 --rc geninfo_all_blocks=1 00:07:56.158 --rc geninfo_unexecuted_blocks=1 00:07:56.158 00:07:56.158 ' 00:07:56.158 13:42:06 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:56.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.158 --rc genhtml_branch_coverage=1 00:07:56.158 --rc genhtml_function_coverage=1 00:07:56.158 --rc genhtml_legend=1 00:07:56.158 --rc geninfo_all_blocks=1 00:07:56.158 --rc geninfo_unexecuted_blocks=1 00:07:56.158 00:07:56.158 ' 00:07:56.158 13:42:06 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:56.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.158 --rc genhtml_branch_coverage=1 00:07:56.158 --rc genhtml_function_coverage=1 00:07:56.158 --rc genhtml_legend=1 00:07:56.158 --rc geninfo_all_blocks=1 00:07:56.158 --rc geninfo_unexecuted_blocks=1 00:07:56.158 00:07:56.158 ' 00:07:56.158 13:42:06 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.158 13:42:06 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.158 13:42:06 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.158 13:42:06 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.158 13:42:06 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.158 13:42:06 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:56.158 13:42:06 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.158 13:42:06 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:56.417 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:56.417 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:56.417 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:56.417 13:42:06 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:56.417 13:42:06 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:56.417 13:42:06 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:56.417 13:42:06 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:56.417 13:42:06 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:56.418 13:42:06 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:56.418 13:42:06 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:56.418 13:42:06 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
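The nvme_in_userspace trace above reduces to filtering lspci output by the NVMe class code (class 01, subclass 08, prog-if 02) and printing the matching PCI addresses (0000:00:10.0 and 0000:00:11.0 here). A minimal stand-alone sketch of that enumeration follows; it is a simplification, since the real scripts/common.sh additionally runs each BDF through pci_can_use (PCI allow/block lists) and skips devices that are not usable from userspace.

#!/usr/bin/env bash
# Rough equivalent of iter_pci_class_code 01 08 02: list PCI addresses of
# NVMe controllers (class 01, subclass 08, prog-if 02) using lspci.
lspci -mm -n -D \
  | grep -i -- -p02 \
  | awk '{ gsub(/"/, "", $2); if ($2 == "0108") print $1 }'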
00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:56.418 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:56.677 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
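The long read/compare loop being traced here is the check_liburing step from test/dd/common.sh: it walks the NEEDED entries reported by objdump -p for the spdk_dd binary and marks liburing as in use when one of them matches liburing.so.*, which is what eventually produces the "* spdk_dd linked to liburing" line further down. A condensed sketch of the same check, assuming the default binary path used in this run:

#!/usr/bin/env bash
# Sketch of the check_liburing idea: scan a binary's dynamic dependencies
# and flag whether liburing is among them.
binary=${1:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd}
liburing_in_use=0
while read -r _ lib _; do
  # each line looks like: "NEEDED  liburing.so.2"
  if [[ $lib == liburing.so.* ]]; then
    liburing_in_use=1
  fi
done < <(objdump -p "$binary" | grep NEEDED)
((liburing_in_use)) && printf '* %s linked to liburing\n' "${binary##*/}"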
00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.678 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:56.679 * spdk_dd linked to liburing 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_PGO_USE=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:07:56.679 13:42:06 
spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:07:56.679 13:42:06 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:56.679 13:42:06 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:56.679 13:42:06 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:56.679 13:42:06 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:56.679 13:42:06 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:56.679 13:42:06 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.679 13:42:06 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:56.679 ************************************ 00:07:56.679 START TEST spdk_dd_basic_rw 00:07:56.679 ************************************ 00:07:56.679 13:42:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:56.679 * Looking for test storage... 00:07:56.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:56.679 13:42:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:56.679 13:42:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:56.679 13:42:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:07:56.679 13:42:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:56.679 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.679 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.679 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.679 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.679 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.679 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.679 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.680 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.680 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.680 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.680 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.680 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:56.680 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:56.680 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.680 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.680 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:56.680 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:56.680 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.680 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:56.680 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.680 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:56.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.940 --rc genhtml_branch_coverage=1 00:07:56.940 --rc genhtml_function_coverage=1 00:07:56.940 --rc genhtml_legend=1 00:07:56.940 --rc geninfo_all_blocks=1 00:07:56.940 --rc geninfo_unexecuted_blocks=1 00:07:56.940 00:07:56.940 ' 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:56.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.940 --rc genhtml_branch_coverage=1 00:07:56.940 --rc genhtml_function_coverage=1 00:07:56.940 --rc genhtml_legend=1 00:07:56.940 --rc geninfo_all_blocks=1 00:07:56.940 --rc geninfo_unexecuted_blocks=1 00:07:56.940 00:07:56.940 ' 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:56.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.940 --rc genhtml_branch_coverage=1 00:07:56.940 --rc genhtml_function_coverage=1 00:07:56.940 --rc genhtml_legend=1 00:07:56.940 --rc geninfo_all_blocks=1 00:07:56.940 --rc geninfo_unexecuted_blocks=1 00:07:56.940 00:07:56.940 ' 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:56.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.940 --rc genhtml_branch_coverage=1 00:07:56.940 --rc genhtml_function_coverage=1 00:07:56.940 --rc genhtml_legend=1 00:07:56.940 --rc geninfo_all_blocks=1 00:07:56.940 --rc geninfo_unexecuted_blocks=1 00:07:56.940 00:07:56.940 ' 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:56.940 13:42:06 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:56.940 13:42:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:56.941 13:42:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:56.941 13:42:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:56.942 ************************************ 00:07:56.942 START TEST dd_bs_lt_native_bs 00:07:56.942 ************************************ 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:56.942 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:57.226 [2024-10-01 13:42:07.122102] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:57.226 [2024-10-01 13:42:07.122230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59904 ] 00:07:57.226 { 00:07:57.226 "subsystems": [ 00:07:57.226 { 00:07:57.226 "subsystem": "bdev", 00:07:57.226 "config": [ 00:07:57.226 { 00:07:57.226 "params": { 00:07:57.226 "trtype": "pcie", 00:07:57.226 "traddr": "0000:00:10.0", 00:07:57.226 "name": "Nvme0" 00:07:57.226 }, 00:07:57.226 "method": "bdev_nvme_attach_controller" 00:07:57.226 }, 00:07:57.226 { 00:07:57.226 "method": "bdev_wait_for_examine" 00:07:57.226 } 00:07:57.226 ] 00:07:57.226 } 00:07:57.226 ] 00:07:57.226 } 00:07:57.226 [2024-10-01 13:42:07.258140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.226 [2024-10-01 13:42:07.398034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.484 [2024-10-01 13:42:07.459106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.484 [2024-10-01 13:42:07.575811] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:57.484 [2024-10-01 13:42:07.575949] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:57.744 [2024-10-01 13:42:07.719414] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:57.744 00:07:57.744 real 0m0.794s 00:07:57.744 user 0m0.579s 00:07:57.744 sys 0m0.179s 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.744 13:42:07 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:57.744 ************************************ 00:07:57.744 END TEST dd_bs_lt_native_bs 00:07:57.744 ************************************ 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:57.744 ************************************ 00:07:57.744 START TEST dd_rw 00:07:57.744 ************************************ 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:57.744 13:42:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:58.715 13:42:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:58.715 13:42:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:58.715 13:42:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:58.715 13:42:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:58.715 { 00:07:58.715 "subsystems": [ 00:07:58.715 { 00:07:58.715 "subsystem": "bdev", 00:07:58.715 "config": [ 00:07:58.715 { 00:07:58.715 "params": { 00:07:58.715 "trtype": "pcie", 00:07:58.715 "traddr": "0000:00:10.0", 00:07:58.715 "name": "Nvme0" 00:07:58.715 }, 00:07:58.715 "method": "bdev_nvme_attach_controller" 00:07:58.715 }, 00:07:58.715 { 00:07:58.715 "method": "bdev_wait_for_examine" 00:07:58.715 } 00:07:58.715 ] 00:07:58.715 } 
00:07:58.715 ] 00:07:58.715 } 00:07:58.715 [2024-10-01 13:42:08.671119] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:58.715 [2024-10-01 13:42:08.671221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59941 ] 00:07:58.715 [2024-10-01 13:42:08.810496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.973 [2024-10-01 13:42:08.958256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.973 [2024-10-01 13:42:09.015862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.231  Copying: 60/60 [kB] (average 29 MBps) 00:07:59.231 00:07:59.231 13:42:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:59.231 13:42:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:59.231 13:42:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:59.231 13:42:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:59.489 { 00:07:59.489 "subsystems": [ 00:07:59.489 { 00:07:59.489 "subsystem": "bdev", 00:07:59.489 "config": [ 00:07:59.489 { 00:07:59.489 "params": { 00:07:59.489 "trtype": "pcie", 00:07:59.489 "traddr": "0000:00:10.0", 00:07:59.489 "name": "Nvme0" 00:07:59.489 }, 00:07:59.489 "method": "bdev_nvme_attach_controller" 00:07:59.489 }, 00:07:59.489 { 00:07:59.489 "method": "bdev_wait_for_examine" 00:07:59.489 } 00:07:59.489 ] 00:07:59.489 } 00:07:59.489 ] 00:07:59.489 } 00:07:59.489 [2024-10-01 13:42:09.431675] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:07:59.489 [2024-10-01 13:42:09.431789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59954 ] 00:07:59.489 [2024-10-01 13:42:09.566682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.747 [2024-10-01 13:42:09.686327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.747 [2024-10-01 13:42:09.740126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.006  Copying: 60/60 [kB] (average 19 MBps) 00:08:00.006 00:08:00.006 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.006 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:00.006 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:00.006 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:00.006 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:00.006 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:00.006 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:00.006 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:00.006 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:00.006 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:00.006 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 [2024-10-01 13:42:10.155418] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:00.006 [2024-10-01 13:42:10.155527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59975 ] 00:08:00.006 { 00:08:00.006 "subsystems": [ 00:08:00.006 { 00:08:00.006 "subsystem": "bdev", 00:08:00.006 "config": [ 00:08:00.006 { 00:08:00.006 "params": { 00:08:00.006 "trtype": "pcie", 00:08:00.006 "traddr": "0000:00:10.0", 00:08:00.006 "name": "Nvme0" 00:08:00.006 }, 00:08:00.006 "method": "bdev_nvme_attach_controller" 00:08:00.006 }, 00:08:00.006 { 00:08:00.006 "method": "bdev_wait_for_examine" 00:08:00.006 } 00:08:00.006 ] 00:08:00.006 } 00:08:00.006 ] 00:08:00.006 } 00:08:00.264 [2024-10-01 13:42:10.293898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.264 [2024-10-01 13:42:10.413227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.523 [2024-10-01 13:42:10.467628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.782  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:00.782 00:08:00.782 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:00.782 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:00.782 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:00.782 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:00.782 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:00.782 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:00.782 13:42:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:01.351 13:42:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:01.351 13:42:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:01.351 13:42:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:01.351 13:42:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:01.610 { 00:08:01.610 "subsystems": [ 00:08:01.610 { 00:08:01.610 "subsystem": "bdev", 00:08:01.610 "config": [ 00:08:01.610 { 00:08:01.610 "params": { 00:08:01.610 "trtype": "pcie", 00:08:01.610 "traddr": "0000:00:10.0", 00:08:01.610 "name": "Nvme0" 00:08:01.610 }, 00:08:01.610 "method": "bdev_nvme_attach_controller" 00:08:01.610 }, 00:08:01.610 { 00:08:01.610 "method": "bdev_wait_for_examine" 00:08:01.610 } 00:08:01.610 ] 00:08:01.610 } 00:08:01.610 ] 00:08:01.610 } 00:08:01.610 [2024-10-01 13:42:11.594657] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:01.610 [2024-10-01 13:42:11.594768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60000 ] 00:08:01.610 [2024-10-01 13:42:11.732854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.868 [2024-10-01 13:42:11.884597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.868 [2024-10-01 13:42:11.942380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.126  Copying: 60/60 [kB] (average 58 MBps) 00:08:02.126 00:08:02.126 13:42:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:02.126 13:42:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:02.126 13:42:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:02.126 13:42:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:02.383 [2024-10-01 13:42:12.354810] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:02.383 [2024-10-01 13:42:12.354908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60015 ] 00:08:02.383 { 00:08:02.383 "subsystems": [ 00:08:02.383 { 00:08:02.383 "subsystem": "bdev", 00:08:02.383 "config": [ 00:08:02.383 { 00:08:02.383 "params": { 00:08:02.383 "trtype": "pcie", 00:08:02.383 "traddr": "0000:00:10.0", 00:08:02.383 "name": "Nvme0" 00:08:02.383 }, 00:08:02.383 "method": "bdev_nvme_attach_controller" 00:08:02.383 }, 00:08:02.383 { 00:08:02.383 "method": "bdev_wait_for_examine" 00:08:02.383 } 00:08:02.383 ] 00:08:02.383 } 00:08:02.383 ] 00:08:02.383 } 00:08:02.383 [2024-10-01 13:42:12.489397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.641 [2024-10-01 13:42:12.610524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.641 [2024-10-01 13:42:12.671009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.900  Copying: 60/60 [kB] (average 58 MBps) 00:08:02.900 00:08:02.900 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.900 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:02.900 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:02.900 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:02.900 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:02.900 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:02.900 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:02.900 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:02.900 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:08:02.900 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:02.900 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.158 { 00:08:03.159 "subsystems": [ 00:08:03.159 { 00:08:03.159 "subsystem": "bdev", 00:08:03.159 "config": [ 00:08:03.159 { 00:08:03.159 "params": { 00:08:03.159 "trtype": "pcie", 00:08:03.159 "traddr": "0000:00:10.0", 00:08:03.159 "name": "Nvme0" 00:08:03.159 }, 00:08:03.159 "method": "bdev_nvme_attach_controller" 00:08:03.159 }, 00:08:03.159 { 00:08:03.159 "method": "bdev_wait_for_examine" 00:08:03.159 } 00:08:03.159 ] 00:08:03.159 } 00:08:03.159 ] 00:08:03.159 } 00:08:03.159 [2024-10-01 13:42:13.091335] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:03.159 [2024-10-01 13:42:13.091431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60036 ] 00:08:03.159 [2024-10-01 13:42:13.228075] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.417 [2024-10-01 13:42:13.349788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.417 [2024-10-01 13:42:13.406160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.676  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:03.676 00:08:03.676 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:03.676 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:03.676 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:03.676 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:03.676 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:03.676 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:03.676 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:03.676 13:42:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.611 13:42:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:04.611 13:42:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:04.611 13:42:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:04.611 13:42:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.611 [2024-10-01 13:42:14.529369] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:04.611 [2024-10-01 13:42:14.529490] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60055 ] 00:08:04.611 { 00:08:04.611 "subsystems": [ 00:08:04.611 { 00:08:04.611 "subsystem": "bdev", 00:08:04.611 "config": [ 00:08:04.611 { 00:08:04.611 "params": { 00:08:04.611 "trtype": "pcie", 00:08:04.611 "traddr": "0000:00:10.0", 00:08:04.611 "name": "Nvme0" 00:08:04.611 }, 00:08:04.611 "method": "bdev_nvme_attach_controller" 00:08:04.611 }, 00:08:04.611 { 00:08:04.611 "method": "bdev_wait_for_examine" 00:08:04.611 } 00:08:04.611 ] 00:08:04.611 } 00:08:04.611 ] 00:08:04.611 } 00:08:04.611 [2024-10-01 13:42:14.662430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.611 [2024-10-01 13:42:14.788507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.869 [2024-10-01 13:42:14.844214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.127  Copying: 56/56 [kB] (average 54 MBps) 00:08:05.127 00:08:05.127 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:05.127 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:05.127 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:05.127 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.127 { 00:08:05.127 "subsystems": [ 00:08:05.127 { 00:08:05.127 "subsystem": "bdev", 00:08:05.127 "config": [ 00:08:05.127 { 00:08:05.127 "params": { 00:08:05.127 "trtype": "pcie", 00:08:05.127 "traddr": "0000:00:10.0", 00:08:05.127 "name": "Nvme0" 00:08:05.127 }, 00:08:05.127 "method": "bdev_nvme_attach_controller" 00:08:05.127 }, 00:08:05.127 { 00:08:05.127 "method": "bdev_wait_for_examine" 00:08:05.127 } 00:08:05.127 ] 00:08:05.127 } 00:08:05.127 ] 00:08:05.127 } 00:08:05.127 [2024-10-01 13:42:15.254840] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:05.127 [2024-10-01 13:42:15.254979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60074 ] 00:08:05.384 [2024-10-01 13:42:15.390858] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.384 [2024-10-01 13:42:15.535990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.641 [2024-10-01 13:42:15.592600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.898  Copying: 56/56 [kB] (average 54 MBps) 00:08:05.898 00:08:05.898 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:05.898 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:05.898 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:05.898 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:05.898 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:05.898 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:05.898 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:05.898 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:05.898 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:05.898 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:05.898 13:42:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.898 { 00:08:05.898 "subsystems": [ 00:08:05.898 { 00:08:05.898 "subsystem": "bdev", 00:08:05.898 "config": [ 00:08:05.898 { 00:08:05.898 "params": { 00:08:05.898 "trtype": "pcie", 00:08:05.898 "traddr": "0000:00:10.0", 00:08:05.898 "name": "Nvme0" 00:08:05.898 }, 00:08:05.898 "method": "bdev_nvme_attach_controller" 00:08:05.898 }, 00:08:05.898 { 00:08:05.898 "method": "bdev_wait_for_examine" 00:08:05.898 } 00:08:05.898 ] 00:08:05.898 } 00:08:05.898 ] 00:08:05.898 } 00:08:05.898 [2024-10-01 13:42:15.998383] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:05.898 [2024-10-01 13:42:15.998479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60091 ] 00:08:06.155 [2024-10-01 13:42:16.134485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.155 [2024-10-01 13:42:16.259306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.155 [2024-10-01 13:42:16.317719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.669  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:06.669 00:08:06.669 13:42:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:06.669 13:42:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:06.669 13:42:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:06.669 13:42:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:06.669 13:42:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:06.669 13:42:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:06.669 13:42:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:07.624 13:42:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:07.624 13:42:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:07.624 13:42:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:07.624 13:42:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:07.624 { 00:08:07.624 "subsystems": [ 00:08:07.624 { 00:08:07.624 "subsystem": "bdev", 00:08:07.624 "config": [ 00:08:07.624 { 00:08:07.624 "params": { 00:08:07.624 "trtype": "pcie", 00:08:07.624 "traddr": "0000:00:10.0", 00:08:07.624 "name": "Nvme0" 00:08:07.624 }, 00:08:07.624 "method": "bdev_nvme_attach_controller" 00:08:07.624 }, 00:08:07.624 { 00:08:07.624 "method": "bdev_wait_for_examine" 00:08:07.624 } 00:08:07.624 ] 00:08:07.624 } 00:08:07.624 ] 00:08:07.624 } 00:08:07.624 [2024-10-01 13:42:17.495510] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:07.624 [2024-10-01 13:42:17.495680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60115 ] 00:08:07.624 [2024-10-01 13:42:17.643787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.625 [2024-10-01 13:42:17.776118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.883 [2024-10-01 13:42:17.831687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.141  Copying: 56/56 [kB] (average 54 MBps) 00:08:08.141 00:08:08.141 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:08.141 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:08.141 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:08.141 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.141 [2024-10-01 13:42:18.234847] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:08.141 [2024-10-01 13:42:18.235517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60134 ] 00:08:08.141 { 00:08:08.141 "subsystems": [ 00:08:08.141 { 00:08:08.141 "subsystem": "bdev", 00:08:08.141 "config": [ 00:08:08.141 { 00:08:08.141 "params": { 00:08:08.141 "trtype": "pcie", 00:08:08.141 "traddr": "0000:00:10.0", 00:08:08.141 "name": "Nvme0" 00:08:08.141 }, 00:08:08.141 "method": "bdev_nvme_attach_controller" 00:08:08.141 }, 00:08:08.141 { 00:08:08.141 "method": "bdev_wait_for_examine" 00:08:08.141 } 00:08:08.141 ] 00:08:08.141 } 00:08:08.141 ] 00:08:08.141 } 00:08:08.399 [2024-10-01 13:42:18.373739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.399 [2024-10-01 13:42:18.520979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.656 [2024-10-01 13:42:18.584653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.912  Copying: 56/56 [kB] (average 54 MBps) 00:08:08.912 00:08:08.912 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.912 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:08.912 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:08.912 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:08.912 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:08.912 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:08.912 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:08.912 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:08.912 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:08:08.912 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:08.912 13:42:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.912 [2024-10-01 13:42:19.020375] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:08.912 [2024-10-01 13:42:19.020525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60150 ] 00:08:08.912 { 00:08:08.912 "subsystems": [ 00:08:08.912 { 00:08:08.912 "subsystem": "bdev", 00:08:08.912 "config": [ 00:08:08.912 { 00:08:08.912 "params": { 00:08:08.912 "trtype": "pcie", 00:08:08.912 "traddr": "0000:00:10.0", 00:08:08.912 "name": "Nvme0" 00:08:08.912 }, 00:08:08.912 "method": "bdev_nvme_attach_controller" 00:08:08.912 }, 00:08:08.912 { 00:08:08.912 "method": "bdev_wait_for_examine" 00:08:08.912 } 00:08:08.912 ] 00:08:08.912 } 00:08:08.912 ] 00:08:08.912 } 00:08:09.169 [2024-10-01 13:42:19.158021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.169 [2024-10-01 13:42:19.301471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.425 [2024-10-01 13:42:19.360464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.682  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:09.682 00:08:09.682 13:42:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:09.682 13:42:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:09.682 13:42:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:09.682 13:42:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:09.682 13:42:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:09.682 13:42:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:09.682 13:42:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:09.682 13:42:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:10.249 13:42:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:10.249 13:42:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:10.249 13:42:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:10.249 13:42:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:10.249 [2024-10-01 13:42:20.258776] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:10.249 [2024-10-01 13:42:20.258866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60173 ] 00:08:10.249 { 00:08:10.249 "subsystems": [ 00:08:10.249 { 00:08:10.249 "subsystem": "bdev", 00:08:10.249 "config": [ 00:08:10.249 { 00:08:10.249 "params": { 00:08:10.249 "trtype": "pcie", 00:08:10.249 "traddr": "0000:00:10.0", 00:08:10.249 "name": "Nvme0" 00:08:10.249 }, 00:08:10.249 "method": "bdev_nvme_attach_controller" 00:08:10.249 }, 00:08:10.249 { 00:08:10.249 "method": "bdev_wait_for_examine" 00:08:10.249 } 00:08:10.249 ] 00:08:10.249 } 00:08:10.249 ] 00:08:10.249 } 00:08:10.249 [2024-10-01 13:42:20.387403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.508 [2024-10-01 13:42:20.509308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.508 [2024-10-01 13:42:20.566014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.767  Copying: 48/48 [kB] (average 46 MBps) 00:08:10.767 00:08:10.767 13:42:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:10.767 13:42:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:10.767 13:42:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:10.767 13:42:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.052 { 00:08:11.052 "subsystems": [ 00:08:11.052 { 00:08:11.052 "subsystem": "bdev", 00:08:11.052 "config": [ 00:08:11.052 { 00:08:11.052 "params": { 00:08:11.052 "trtype": "pcie", 00:08:11.052 "traddr": "0000:00:10.0", 00:08:11.052 "name": "Nvme0" 00:08:11.052 }, 00:08:11.052 "method": "bdev_nvme_attach_controller" 00:08:11.052 }, 00:08:11.052 { 00:08:11.052 "method": "bdev_wait_for_examine" 00:08:11.052 } 00:08:11.052 ] 00:08:11.052 } 00:08:11.052 ] 00:08:11.052 } 00:08:11.052 [2024-10-01 13:42:20.981421] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:11.052 [2024-10-01 13:42:20.981524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60188 ] 00:08:11.052 [2024-10-01 13:42:21.120841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.310 [2024-10-01 13:42:21.249545] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.310 [2024-10-01 13:42:21.315202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.568  Copying: 48/48 [kB] (average 46 MBps) 00:08:11.568 00:08:11.568 13:42:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.568 13:42:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:11.568 13:42:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:11.568 13:42:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:11.568 13:42:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:11.568 13:42:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:11.568 13:42:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:11.568 13:42:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:11.568 13:42:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:11.568 13:42:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:11.568 13:42:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.568 [2024-10-01 13:42:21.729424] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:11.568 [2024-10-01 13:42:21.729536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60205 ] 00:08:11.568 { 00:08:11.568 "subsystems": [ 00:08:11.568 { 00:08:11.568 "subsystem": "bdev", 00:08:11.568 "config": [ 00:08:11.568 { 00:08:11.568 "params": { 00:08:11.568 "trtype": "pcie", 00:08:11.568 "traddr": "0000:00:10.0", 00:08:11.568 "name": "Nvme0" 00:08:11.568 }, 00:08:11.568 "method": "bdev_nvme_attach_controller" 00:08:11.568 }, 00:08:11.568 { 00:08:11.568 "method": "bdev_wait_for_examine" 00:08:11.568 } 00:08:11.568 ] 00:08:11.568 } 00:08:11.568 ] 00:08:11.568 } 00:08:11.826 [2024-10-01 13:42:21.867401] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.826 [2024-10-01 13:42:21.983266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.084 [2024-10-01 13:42:22.039221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.342  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:12.342 00:08:12.342 13:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:12.342 13:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:12.342 13:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:12.342 13:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:12.342 13:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:12.342 13:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:12.342 13:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:12.908 13:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:12.908 13:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:12.908 13:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:12.908 13:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:12.908 [2024-10-01 13:42:22.953175] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:12.908 [2024-10-01 13:42:22.953303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60230 ] 00:08:12.908 { 00:08:12.908 "subsystems": [ 00:08:12.908 { 00:08:12.908 "subsystem": "bdev", 00:08:12.908 "config": [ 00:08:12.908 { 00:08:12.908 "params": { 00:08:12.908 "trtype": "pcie", 00:08:12.908 "traddr": "0000:00:10.0", 00:08:12.908 "name": "Nvme0" 00:08:12.908 }, 00:08:12.908 "method": "bdev_nvme_attach_controller" 00:08:12.908 }, 00:08:12.908 { 00:08:12.908 "method": "bdev_wait_for_examine" 00:08:12.908 } 00:08:12.908 ] 00:08:12.908 } 00:08:12.908 ] 00:08:12.908 } 00:08:13.167 [2024-10-01 13:42:23.094271] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.167 [2024-10-01 13:42:23.230196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.167 [2024-10-01 13:42:23.286321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.685  Copying: 48/48 [kB] (average 46 MBps) 00:08:13.685 00:08:13.685 13:42:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:13.685 13:42:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:13.685 13:42:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:13.685 13:42:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:13.685 { 00:08:13.685 "subsystems": [ 00:08:13.685 { 00:08:13.685 "subsystem": "bdev", 00:08:13.685 "config": [ 00:08:13.685 { 00:08:13.685 "params": { 00:08:13.685 "trtype": "pcie", 00:08:13.685 "traddr": "0000:00:10.0", 00:08:13.685 "name": "Nvme0" 00:08:13.685 }, 00:08:13.685 "method": "bdev_nvme_attach_controller" 00:08:13.685 }, 00:08:13.685 { 00:08:13.685 "method": "bdev_wait_for_examine" 00:08:13.685 } 00:08:13.685 ] 00:08:13.685 } 00:08:13.685 ] 00:08:13.685 } 00:08:13.685 [2024-10-01 13:42:23.688866] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:13.685 [2024-10-01 13:42:23.689005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60243 ] 00:08:13.685 [2024-10-01 13:42:23.827658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.950 [2024-10-01 13:42:23.950765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.950 [2024-10-01 13:42:24.005349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.208  Copying: 48/48 [kB] (average 46 MBps) 00:08:14.208 00:08:14.208 13:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.208 13:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:14.208 13:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:14.208 13:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:14.208 13:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:14.208 13:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:14.208 13:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:14.208 13:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:14.208 13:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:14.208 13:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:14.208 13:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:14.466 { 00:08:14.466 "subsystems": [ 00:08:14.466 { 00:08:14.466 "subsystem": "bdev", 00:08:14.466 "config": [ 00:08:14.466 { 00:08:14.466 "params": { 00:08:14.466 "trtype": "pcie", 00:08:14.466 "traddr": "0000:00:10.0", 00:08:14.466 "name": "Nvme0" 00:08:14.466 }, 00:08:14.466 "method": "bdev_nvme_attach_controller" 00:08:14.466 }, 00:08:14.466 { 00:08:14.466 "method": "bdev_wait_for_examine" 00:08:14.466 } 00:08:14.466 ] 00:08:14.466 } 00:08:14.466 ] 00:08:14.466 } 00:08:14.466 [2024-10-01 13:42:24.426803] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:14.466 [2024-10-01 13:42:24.426969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60259 ] 00:08:14.466 [2024-10-01 13:42:24.565190] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.723 [2024-10-01 13:42:24.696621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.723 [2024-10-01 13:42:24.754484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.982  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:14.982 00:08:14.982 00:08:14.982 real 0m17.205s 00:08:14.982 user 0m12.871s 00:08:14.982 sys 0m5.862s 00:08:14.982 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.982 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:14.982 ************************************ 00:08:14.982 END TEST dd_rw 00:08:14.982 ************************************ 00:08:14.982 13:42:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:14.982 13:42:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:14.982 13:42:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.982 13:42:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:14.982 ************************************ 00:08:14.982 START TEST dd_rw_offset 00:08:14.982 ************************************ 00:08:14.982 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:08:14.982 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:14.982 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:14.982 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:14.982 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:15.242 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:15.242 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=dhqhaiq4rdfjjqx3dxa405zilmt5jc5bsbzby636uno0bmkcpg0f7yx7ohbh8qb929ril3v53knyaponvv1jijcqlddeaqlgv9muabz1n4y583rk8mu2czjx1ep547xfyabq4gsjche6c64uj7mu9qjhuv8g3zhrr9pe9y60btn4gbqq47dhv0jn4c8vahn0l0bo484mvyan9nruuq6lhlmot20mrpna2v1mf0tobrphafntejpvhppyc74vj5bhnwdx7nwx2s94g1n3cvepn1iyrub7etjcsy3x6ncyejuntecd2ccijb64i7oirsubizqiuvu6duqh3fyubmjlemn2m7lh1u3udzb677kl1awox8fwz3vw9zasv8muxp3ti7nsuw55ly4oe63xjlb0jgruk3jse51uxjxhdrtcrb6oxbwyomrubhwnu9tzsaf1k4bpox09fe9f6z8fsfol08526olr18ehosmwl0e0m56i7fbhb10s39con5vnejsfbdbx8nb44swdo1c5csiwxdoot75p4civkhhhgnlianky9q2x5rzhd3j6my7ujw4u4yunb82xifuwuhavs37ncasfna8jt9uzwngygmpy5muifgp7pvypk8608snw6yph0d2eug0xtkoganql11yhjcguyt5n5ko2uf2d7j5h24k4expmqx0hbh4g1qrr9vvpxjhstdfufpt5l6320gj21altnuskgy04nhe45ndgs96f906duhjfh39gf7zfmzj5x6z8go8a2frjpe7mbm19l9aotn1u3sl5nd70vcegqrlg34lua9bwujh4e5p6l2q1sh3b6r8t6xsfvodr1ynkkmu023d12ognxmvnndfzg0qolka9k94q316rw7l0wpl5dxesxbtdr2m05jmgj2kxwrzqob31c1gmmw7t3slo00m6kourqvf8tbp6h1uzawxpbxvuy6vfpmreg7d3y2v2824m35gi38mkp2i97e05lwffygyh66zyjnz7y38b43wictkqu3vojvob1ilskay7cb91dy70epqgef62pfeniqsa8fdoejpjkhbzyvtygzng3490vmsw8ym6flm7bddmwo6uo5ztba0yzr1d3hztnpw9obvl4u40ayemyghm9429hd1m5mn157iyo51n8ov5t10g28igz8rv4aooeakniw0y5xogtrdfb23lscgsklsyn1nww98r0y9g6rdzxyqplk97g68egeprsg5d0l67s23tawiswurzebd0s7e1pvmt9hwycjo9g1nb22dnqq3okkm7f7px7b0cju9mmihvdjb53cnumu39zx5yc42yts7t4b38w3vshxwx04pf1nv71rulj4c5yxtsgplzf00lamky1458k0n6grljdlzhhekyftj1vw9ub29m1tsm8u8bgtjvzw4we1v7vlgd1bw072a7dilvawukh1jzx29ifto3b5t1sluhlgan2h2l2zsl6e4k1toti71klja8tay53rnmzfi778egggxd3fx70hr240a4kvvkp3lmkxpcsuvvtgud9d9a3nvwwn574mc2ybonrycuchk6glz3ljzqmekh8ole5pocvwawy7s01ogx43zoelusc7le0lzcaip65oxze3hhpquc8ipi86bwgqxge5gzykvg4edjfl6zp7ekflmdh1p17jtj3qisg1r1gvps4lysvchyq2fznzmgd5xd3ewtq1iljdcjus85dwsy2v7xb8q0wwkyc4g9veokn70974xnxj3czyxkeylqnmpa0zsuj7me2urgwtj67mxf9g2idmusj4f1oauv6g4yj9dnu9qe7slct7xqasmizhejfefzy3j0p072cpu136sxj7uvjt9s52rav4pl5z6s9m749bsbwbe2e6611yppknzaf827h7a02k3l95ktwgy7o909xenlkw0c1uh5kqjm1mnz5gx8yfffwssx9mcfic78m6832lz4gy71opcotgafvdc3k15lk6t4ovx1j7wotexj1g9wdebshfjpvl6rk8gsbijer2z576x452wt8nzzte490apm6kdl0nuambfywdt1bl0uuxda89arwnkcr08o8xxgxenj2sfnny450buz1q5jh6kgj852karga8lfgzr3gbb9mo6dvqugbebnzpzbbfidpnzcux8r441np13mq9j3ji9gwx532x4ffsbptaewtf1kh64ebybancezxrru26hxmk7oo0p1m1dixl8wixi19mkf10zb6xs06a85fb6ytbu3va8x5pasksu6wslxtdhkrqfbcysd8ugxsudhpmtk24dwwt357k1vi2vp2vny1r4pdue2qla9ddmwns7ik8m9bwu3uf6lj3r6ripzj5mqkiivcu77zzxkrhp24iayr2jlem33t47icyifs5c2isb03fnuzpuhp7lzphvi0eqsr0k0lvcor3lg1om6i3271js5swx712gyhfw9dp9c1rlkk4emb61g2za887gnn7g14ocaz212ngyflhzi6bm3014iwm4dhehslz3lmg1vryenlutqtzq804d4qffqyzvx3vqy9i9f4yiu5pi5qjhhblxxha7ow9y8yk4010gfoqm2uqll51nrsmvxuxk0iqb56yr3flrayqvpz7c883lqidnb2fv7tlyj6pj75z11jk2qrb9tajp4l0ehjsunfhzey2mvuo6hi0v8x2ggcpnx5w1l9i9cwh5z7qq7md7chog2wg701jjikyje79raoptwo2r8w3b3bi6c9wxnacinpgghwt23mgxfiud8pm3wyefej8o0w3yfpqw3bxw2cyvigxl6chz9ow1h153029fs4of4piq5dbsoll3olxddzof5cw2cmpcj0e6swg6ber6glgn5dbx1cu3mivk2fwqx8igk6yoptpbuzct0usfxn2u1pp53umbjcow459dynn4opi2uak5fj763dtod0mk75quj507pqds2gwb2qs1vjbug3ry68k0w4nsqp9ai5j8lrm1tbudcsvqpk6io5tl4tti8ech3suhz9cb6jtiz65ujofwwounvd2j29uzz4xjru5m4keb3v6dicrimn4an6g1647e6znbra70y70br8s6o7fpgm1n30qtco1a3hllapb4hb51uoov4pgausylwn6dl9h82817jheb3jk2958qcgxkexey9jpkvfsbynfm400h2zg8hpoqv9xt7s0db4s5ag0um1i4zonvn8y555x6inbsmg6ixlfx7qss63x7f6jwu7pz1vsqm4skn2fzlb0lmr3xgmack0397hv08aha49e61v82xlkuqusljwrzma94akje67xdzehceamwlrhabpnh4zvj7mgae0uh0uknlz1l59jz2z01j75bjtfdmlc0ppioy2t3ksp1y4xfqwmp8ym61en7tm1lbx5475ydv4v3ia5ngypgz1cwshozp6sgo84rqz0xftpm8iygt91ilcvejzhtx07yforsdf1s2a6ia1tplu35evobbwv9iv9hu79
xip24pl7k4kmjmrc6r3t1epliw7wdkebiglrl9dd6r7opr53dbu9prfu09tch5anfw753g1vo4496ppvv2tvmzvesswsdyvq0esvgrq5pi3wqa9fyiawnt8qty58dlv55cmxft9rcqdm0bwsjnp2eab78iksfzqgajs0y9a9ek2ongise6nd98v10ci5hz62piwo1sbgpbvxy1477r3a07zvimhe3gj21kh1plceg4vfwha01c02ak3c3j8shmo06ycluvmy2ho2be4yk02uwk49oavkkhhfpf3y9gwqszad2h1dab05n6pda3dpe6h0fw5k558mndg0udu0urmnyjr3wh69zokx1lvioopa9fjaz9oeuiaj1xi4abgrl6wy2ik6mfvz16rih92bwfwhau7y6b138vouvp9woyj9cyl3a64e416r87ic38d8lzkbjbgu54omjhjwczsqm75k9xvpdbrsfjotydcw0yey2skpl938k1wvvtyadcmbq9lxsgtgura68qfqv1p21mex6pr1ss4w1581ux 00:08:15.242 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:15.242 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:15.242 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:15.242 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:15.242 { 00:08:15.242 "subsystems": [ 00:08:15.242 { 00:08:15.242 "subsystem": "bdev", 00:08:15.242 "config": [ 00:08:15.242 { 00:08:15.242 "params": { 00:08:15.242 "trtype": "pcie", 00:08:15.242 "traddr": "0000:00:10.0", 00:08:15.242 "name": "Nvme0" 00:08:15.242 }, 00:08:15.242 "method": "bdev_nvme_attach_controller" 00:08:15.242 }, 00:08:15.242 { 00:08:15.242 "method": "bdev_wait_for_examine" 00:08:15.242 } 00:08:15.242 ] 00:08:15.242 } 00:08:15.242 ] 00:08:15.242 } 00:08:15.242 [2024-10-01 13:42:25.273871] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:15.242 [2024-10-01 13:42:25.274189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60295 ] 00:08:15.242 [2024-10-01 13:42:25.415949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.502 [2024-10-01 13:42:25.551956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.502 [2024-10-01 13:42:25.609714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.021  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:16.021 00:08:16.021 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:16.021 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:16.021 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:16.021 13:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:16.021 [2024-10-01 13:42:26.007507] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:16.021 [2024-10-01 13:42:26.007869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60314 ] 00:08:16.021 { 00:08:16.021 "subsystems": [ 00:08:16.021 { 00:08:16.021 "subsystem": "bdev", 00:08:16.021 "config": [ 00:08:16.021 { 00:08:16.021 "params": { 00:08:16.021 "trtype": "pcie", 00:08:16.021 "traddr": "0000:00:10.0", 00:08:16.021 "name": "Nvme0" 00:08:16.021 }, 00:08:16.021 "method": "bdev_nvme_attach_controller" 00:08:16.021 }, 00:08:16.021 { 00:08:16.021 "method": "bdev_wait_for_examine" 00:08:16.021 } 00:08:16.021 ] 00:08:16.021 } 00:08:16.021 ] 00:08:16.021 } 00:08:16.021 [2024-10-01 13:42:26.140170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.279 [2024-10-01 13:42:26.256602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.280 [2024-10-01 13:42:26.313454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.538  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:16.538 00:08:16.538 ************************************ 00:08:16.538 END TEST dd_rw_offset 00:08:16.538 ************************************ 00:08:16.538 13:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:16.539 13:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ dhqhaiq4rdfjjqx3dxa405zilmt5jc5bsbzby636uno0bmkcpg0f7yx7ohbh8qb929ril3v53knyaponvv1jijcqlddeaqlgv9muabz1n4y583rk8mu2czjx1ep547xfyabq4gsjche6c64uj7mu9qjhuv8g3zhrr9pe9y60btn4gbqq47dhv0jn4c8vahn0l0bo484mvyan9nruuq6lhlmot20mrpna2v1mf0tobrphafntejpvhppyc74vj5bhnwdx7nwx2s94g1n3cvepn1iyrub7etjcsy3x6ncyejuntecd2ccijb64i7oirsubizqiuvu6duqh3fyubmjlemn2m7lh1u3udzb677kl1awox8fwz3vw9zasv8muxp3ti7nsuw55ly4oe63xjlb0jgruk3jse51uxjxhdrtcrb6oxbwyomrubhwnu9tzsaf1k4bpox09fe9f6z8fsfol08526olr18ehosmwl0e0m56i7fbhb10s39con5vnejsfbdbx8nb44swdo1c5csiwxdoot75p4civkhhhgnlianky9q2x5rzhd3j6my7ujw4u4yunb82xifuwuhavs37ncasfna8jt9uzwngygmpy5muifgp7pvypk8608snw6yph0d2eug0xtkoganql11yhjcguyt5n5ko2uf2d7j5h24k4expmqx0hbh4g1qrr9vvpxjhstdfufpt5l6320gj21altnuskgy04nhe45ndgs96f906duhjfh39gf7zfmzj5x6z8go8a2frjpe7mbm19l9aotn1u3sl5nd70vcegqrlg34lua9bwujh4e5p6l2q1sh3b6r8t6xsfvodr1ynkkmu023d12ognxmvnndfzg0qolka9k94q316rw7l0wpl5dxesxbtdr2m05jmgj2kxwrzqob31c1gmmw7t3slo00m6kourqvf8tbp6h1uzawxpbxvuy6vfpmreg7d3y2v2824m35gi38mkp2i97e05lwffygyh66zyjnz7y38b43wictkqu3vojvob1ilskay7cb91dy70epqgef62pfeniqsa8fdoejpjkhbzyvtygzng3490vmsw8ym6flm7bddmwo6uo5ztba0yzr1d3hztnpw9obvl4u40ayemyghm9429hd1m5mn157iyo51n8ov5t10g28igz8rv4aooeakniw0y5xogtrdfb23lscgsklsyn1nww98r0y9g6rdzxyqplk97g68egeprsg5d0l67s23tawiswurzebd0s7e1pvmt9hwycjo9g1nb22dnqq3okkm7f7px7b0cju9mmihvdjb53cnumu39zx5yc42yts7t4b38w3vshxwx04pf1nv71rulj4c5yxtsgplzf00lamky1458k0n6grljdlzhhekyftj1vw9ub29m1tsm8u8bgtjvzw4we1v7vlgd1bw072a7dilvawukh1jzx29ifto3b5t1sluhlgan2h2l2zsl6e4k1toti71klja8tay53rnmzfi778egggxd3fx70hr240a4kvvkp3lmkxpcsuvvtgud9d9a3nvwwn574mc2ybonrycuchk6glz3ljzqmekh8ole5pocvwawy7s01ogx43zoelusc7le0lzcaip65oxze3hhpquc8ipi86bwgqxge5gzykvg4edjfl6zp7ekflmdh1p17jtj3qisg1r1gvps4lysvchyq2fznzmgd5xd3ewtq1iljdcjus85dwsy2v7xb8q0wwkyc4g9veokn70974xnxj3czyxkeylqnmpa0zsuj7me2urgwtj67mxf9g2idmusj4f1oauv6g4yj9dnu9qe7slct7xqasmizhejfefzy3j0p072cpu136sxj7uvjt9s52rav4pl5z6s9m749bsbwbe2e6611yppknzaf827h7a02k3l95ktwgy7o909xenlkw0c1uh5kqjm1mnz5gx8yfffwssx9mcfic78m6832lz4gy71opcotgafvdc3k15lk6t4ovx1j
7wotexj1g9wdebshfjpvl6rk8gsbijer2z576x452wt8nzzte490apm6kdl0nuambfywdt1bl0uuxda89arwnkcr08o8xxgxenj2sfnny450buz1q5jh6kgj852karga8lfgzr3gbb9mo6dvqugbebnzpzbbfidpnzcux8r441np13mq9j3ji9gwx532x4ffsbptaewtf1kh64ebybancezxrru26hxmk7oo0p1m1dixl8wixi19mkf10zb6xs06a85fb6ytbu3va8x5pasksu6wslxtdhkrqfbcysd8ugxsudhpmtk24dwwt357k1vi2vp2vny1r4pdue2qla9ddmwns7ik8m9bwu3uf6lj3r6ripzj5mqkiivcu77zzxkrhp24iayr2jlem33t47icyifs5c2isb03fnuzpuhp7lzphvi0eqsr0k0lvcor3lg1om6i3271js5swx712gyhfw9dp9c1rlkk4emb61g2za887gnn7g14ocaz212ngyflhzi6bm3014iwm4dhehslz3lmg1vryenlutqtzq804d4qffqyzvx3vqy9i9f4yiu5pi5qjhhblxxha7ow9y8yk4010gfoqm2uqll51nrsmvxuxk0iqb56yr3flrayqvpz7c883lqidnb2fv7tlyj6pj75z11jk2qrb9tajp4l0ehjsunfhzey2mvuo6hi0v8x2ggcpnx5w1l9i9cwh5z7qq7md7chog2wg701jjikyje79raoptwo2r8w3b3bi6c9wxnacinpgghwt23mgxfiud8pm3wyefej8o0w3yfpqw3bxw2cyvigxl6chz9ow1h153029fs4of4piq5dbsoll3olxddzof5cw2cmpcj0e6swg6ber6glgn5dbx1cu3mivk2fwqx8igk6yoptpbuzct0usfxn2u1pp53umbjcow459dynn4opi2uak5fj763dtod0mk75quj507pqds2gwb2qs1vjbug3ry68k0w4nsqp9ai5j8lrm1tbudcsvqpk6io5tl4tti8ech3suhz9cb6jtiz65ujofwwounvd2j29uzz4xjru5m4keb3v6dicrimn4an6g1647e6znbra70y70br8s6o7fpgm1n30qtco1a3hllapb4hb51uoov4pgausylwn6dl9h82817jheb3jk2958qcgxkexey9jpkvfsbynfm400h2zg8hpoqv9xt7s0db4s5ag0um1i4zonvn8y555x6inbsmg6ixlfx7qss63x7f6jwu7pz1vsqm4skn2fzlb0lmr3xgmack0397hv08aha49e61v82xlkuqusljwrzma94akje67xdzehceamwlrhabpnh4zvj7mgae0uh0uknlz1l59jz2z01j75bjtfdmlc0ppioy2t3ksp1y4xfqwmp8ym61en7tm1lbx5475ydv4v3ia5ngypgz1cwshozp6sgo84rqz0xftpm8iygt91ilcvejzhtx07yforsdf1s2a6ia1tplu35evobbwv9iv9hu79xip24pl7k4kmjmrc6r3t1epliw7wdkebiglrl9dd6r7opr53dbu9prfu09tch5anfw753g1vo4496ppvv2tvmzvesswsdyvq0esvgrq5pi3wqa9fyiawnt8qty58dlv55cmxft9rcqdm0bwsjnp2eab78iksfzqgajs0y9a9ek2ongise6nd98v10ci5hz62piwo1sbgpbvxy1477r3a07zvimhe3gj21kh1plceg4vfwha01c02ak3c3j8shmo06ycluvmy2ho2be4yk02uwk49oavkkhhfpf3y9gwqszad2h1dab05n6pda3dpe6h0fw5k558mndg0udu0urmnyjr3wh69zokx1lvioopa9fjaz9oeuiaj1xi4abgrl6wy2ik6mfvz16rih92bwfwhau7y6b138vouvp9woyj9cyl3a64e416r87ic38d8lzkbjbgu54omjhjwczsqm75k9xvpdbrsfjotydcw0yey2skpl938k1wvvtyadcmbq9lxsgtgura68qfqv1p21mex6pr1ss4w1581ux == 
\d\h\q\h\a\i\q\4\r\d\f\j\j\q\x\3\d\x\a\4\0\5\z\i\l\m\t\5\j\c\5\b\s\b\z\b\y\6\3\6\u\n\o\0\b\m\k\c\p\g\0\f\7\y\x\7\o\h\b\h\8\q\b\9\2\9\r\i\l\3\v\5\3\k\n\y\a\p\o\n\v\v\1\j\i\j\c\q\l\d\d\e\a\q\l\g\v\9\m\u\a\b\z\1\n\4\y\5\8\3\r\k\8\m\u\2\c\z\j\x\1\e\p\5\4\7\x\f\y\a\b\q\4\g\s\j\c\h\e\6\c\6\4\u\j\7\m\u\9\q\j\h\u\v\8\g\3\z\h\r\r\9\p\e\9\y\6\0\b\t\n\4\g\b\q\q\4\7\d\h\v\0\j\n\4\c\8\v\a\h\n\0\l\0\b\o\4\8\4\m\v\y\a\n\9\n\r\u\u\q\6\l\h\l\m\o\t\2\0\m\r\p\n\a\2\v\1\m\f\0\t\o\b\r\p\h\a\f\n\t\e\j\p\v\h\p\p\y\c\7\4\v\j\5\b\h\n\w\d\x\7\n\w\x\2\s\9\4\g\1\n\3\c\v\e\p\n\1\i\y\r\u\b\7\e\t\j\c\s\y\3\x\6\n\c\y\e\j\u\n\t\e\c\d\2\c\c\i\j\b\6\4\i\7\o\i\r\s\u\b\i\z\q\i\u\v\u\6\d\u\q\h\3\f\y\u\b\m\j\l\e\m\n\2\m\7\l\h\1\u\3\u\d\z\b\6\7\7\k\l\1\a\w\o\x\8\f\w\z\3\v\w\9\z\a\s\v\8\m\u\x\p\3\t\i\7\n\s\u\w\5\5\l\y\4\o\e\6\3\x\j\l\b\0\j\g\r\u\k\3\j\s\e\5\1\u\x\j\x\h\d\r\t\c\r\b\6\o\x\b\w\y\o\m\r\u\b\h\w\n\u\9\t\z\s\a\f\1\k\4\b\p\o\x\0\9\f\e\9\f\6\z\8\f\s\f\o\l\0\8\5\2\6\o\l\r\1\8\e\h\o\s\m\w\l\0\e\0\m\5\6\i\7\f\b\h\b\1\0\s\3\9\c\o\n\5\v\n\e\j\s\f\b\d\b\x\8\n\b\4\4\s\w\d\o\1\c\5\c\s\i\w\x\d\o\o\t\7\5\p\4\c\i\v\k\h\h\h\g\n\l\i\a\n\k\y\9\q\2\x\5\r\z\h\d\3\j\6\m\y\7\u\j\w\4\u\4\y\u\n\b\8\2\x\i\f\u\w\u\h\a\v\s\3\7\n\c\a\s\f\n\a\8\j\t\9\u\z\w\n\g\y\g\m\p\y\5\m\u\i\f\g\p\7\p\v\y\p\k\8\6\0\8\s\n\w\6\y\p\h\0\d\2\e\u\g\0\x\t\k\o\g\a\n\q\l\1\1\y\h\j\c\g\u\y\t\5\n\5\k\o\2\u\f\2\d\7\j\5\h\2\4\k\4\e\x\p\m\q\x\0\h\b\h\4\g\1\q\r\r\9\v\v\p\x\j\h\s\t\d\f\u\f\p\t\5\l\6\3\2\0\g\j\2\1\a\l\t\n\u\s\k\g\y\0\4\n\h\e\4\5\n\d\g\s\9\6\f\9\0\6\d\u\h\j\f\h\3\9\g\f\7\z\f\m\z\j\5\x\6\z\8\g\o\8\a\2\f\r\j\p\e\7\m\b\m\1\9\l\9\a\o\t\n\1\u\3\s\l\5\n\d\7\0\v\c\e\g\q\r\l\g\3\4\l\u\a\9\b\w\u\j\h\4\e\5\p\6\l\2\q\1\s\h\3\b\6\r\8\t\6\x\s\f\v\o\d\r\1\y\n\k\k\m\u\0\2\3\d\1\2\o\g\n\x\m\v\n\n\d\f\z\g\0\q\o\l\k\a\9\k\9\4\q\3\1\6\r\w\7\l\0\w\p\l\5\d\x\e\s\x\b\t\d\r\2\m\0\5\j\m\g\j\2\k\x\w\r\z\q\o\b\3\1\c\1\g\m\m\w\7\t\3\s\l\o\0\0\m\6\k\o\u\r\q\v\f\8\t\b\p\6\h\1\u\z\a\w\x\p\b\x\v\u\y\6\v\f\p\m\r\e\g\7\d\3\y\2\v\2\8\2\4\m\3\5\g\i\3\8\m\k\p\2\i\9\7\e\0\5\l\w\f\f\y\g\y\h\6\6\z\y\j\n\z\7\y\3\8\b\4\3\w\i\c\t\k\q\u\3\v\o\j\v\o\b\1\i\l\s\k\a\y\7\c\b\9\1\d\y\7\0\e\p\q\g\e\f\6\2\p\f\e\n\i\q\s\a\8\f\d\o\e\j\p\j\k\h\b\z\y\v\t\y\g\z\n\g\3\4\9\0\v\m\s\w\8\y\m\6\f\l\m\7\b\d\d\m\w\o\6\u\o\5\z\t\b\a\0\y\z\r\1\d\3\h\z\t\n\p\w\9\o\b\v\l\4\u\4\0\a\y\e\m\y\g\h\m\9\4\2\9\h\d\1\m\5\m\n\1\5\7\i\y\o\5\1\n\8\o\v\5\t\1\0\g\2\8\i\g\z\8\r\v\4\a\o\o\e\a\k\n\i\w\0\y\5\x\o\g\t\r\d\f\b\2\3\l\s\c\g\s\k\l\s\y\n\1\n\w\w\9\8\r\0\y\9\g\6\r\d\z\x\y\q\p\l\k\9\7\g\6\8\e\g\e\p\r\s\g\5\d\0\l\6\7\s\2\3\t\a\w\i\s\w\u\r\z\e\b\d\0\s\7\e\1\p\v\m\t\9\h\w\y\c\j\o\9\g\1\n\b\2\2\d\n\q\q\3\o\k\k\m\7\f\7\p\x\7\b\0\c\j\u\9\m\m\i\h\v\d\j\b\5\3\c\n\u\m\u\3\9\z\x\5\y\c\4\2\y\t\s\7\t\4\b\3\8\w\3\v\s\h\x\w\x\0\4\p\f\1\n\v\7\1\r\u\l\j\4\c\5\y\x\t\s\g\p\l\z\f\0\0\l\a\m\k\y\1\4\5\8\k\0\n\6\g\r\l\j\d\l\z\h\h\e\k\y\f\t\j\1\v\w\9\u\b\2\9\m\1\t\s\m\8\u\8\b\g\t\j\v\z\w\4\w\e\1\v\7\v\l\g\d\1\b\w\0\7\2\a\7\d\i\l\v\a\w\u\k\h\1\j\z\x\2\9\i\f\t\o\3\b\5\t\1\s\l\u\h\l\g\a\n\2\h\2\l\2\z\s\l\6\e\4\k\1\t\o\t\i\7\1\k\l\j\a\8\t\a\y\5\3\r\n\m\z\f\i\7\7\8\e\g\g\g\x\d\3\f\x\7\0\h\r\2\4\0\a\4\k\v\v\k\p\3\l\m\k\x\p\c\s\u\v\v\t\g\u\d\9\d\9\a\3\n\v\w\w\n\5\7\4\m\c\2\y\b\o\n\r\y\c\u\c\h\k\6\g\l\z\3\l\j\z\q\m\e\k\h\8\o\l\e\5\p\o\c\v\w\a\w\y\7\s\0\1\o\g\x\4\3\z\o\e\l\u\s\c\7\l\e\0\l\z\c\a\i\p\6\5\o\x\z\e\3\h\h\p\q\u\c\8\i\p\i\8\6\b\w\g\q\x\g\e\5\g\z\y\k\v\g\4\e\d\j\f\l\6\z\p\7\e\k\f\l\m\d\h\1\p\1\7\j\t\j\3\q\i\s\g\1\r\1\g\v\p\s\4\l\y\s\v\c\h\y\q\2\f\z\n\z\m\g\d\5\x\d\3\e\w\t\q\1\i\l\j\d\c\j\u\s\8\5\d\w\s\y\2\v\7\x\b\8\q\0\w\w\k\y\c\4\g\9\v\e\o\k\n\7\0\9\7\4\x\n\x\
j\3\c\z\y\x\k\e\y\l\q\n\m\p\a\0\z\s\u\j\7\m\e\2\u\r\g\w\t\j\6\7\m\x\f\9\g\2\i\d\m\u\s\j\4\f\1\o\a\u\v\6\g\4\y\j\9\d\n\u\9\q\e\7\s\l\c\t\7\x\q\a\s\m\i\z\h\e\j\f\e\f\z\y\3\j\0\p\0\7\2\c\p\u\1\3\6\s\x\j\7\u\v\j\t\9\s\5\2\r\a\v\4\p\l\5\z\6\s\9\m\7\4\9\b\s\b\w\b\e\2\e\6\6\1\1\y\p\p\k\n\z\a\f\8\2\7\h\7\a\0\2\k\3\l\9\5\k\t\w\g\y\7\o\9\0\9\x\e\n\l\k\w\0\c\1\u\h\5\k\q\j\m\1\m\n\z\5\g\x\8\y\f\f\f\w\s\s\x\9\m\c\f\i\c\7\8\m\6\8\3\2\l\z\4\g\y\7\1\o\p\c\o\t\g\a\f\v\d\c\3\k\1\5\l\k\6\t\4\o\v\x\1\j\7\w\o\t\e\x\j\1\g\9\w\d\e\b\s\h\f\j\p\v\l\6\r\k\8\g\s\b\i\j\e\r\2\z\5\7\6\x\4\5\2\w\t\8\n\z\z\t\e\4\9\0\a\p\m\6\k\d\l\0\n\u\a\m\b\f\y\w\d\t\1\b\l\0\u\u\x\d\a\8\9\a\r\w\n\k\c\r\0\8\o\8\x\x\g\x\e\n\j\2\s\f\n\n\y\4\5\0\b\u\z\1\q\5\j\h\6\k\g\j\8\5\2\k\a\r\g\a\8\l\f\g\z\r\3\g\b\b\9\m\o\6\d\v\q\u\g\b\e\b\n\z\p\z\b\b\f\i\d\p\n\z\c\u\x\8\r\4\4\1\n\p\1\3\m\q\9\j\3\j\i\9\g\w\x\5\3\2\x\4\f\f\s\b\p\t\a\e\w\t\f\1\k\h\6\4\e\b\y\b\a\n\c\e\z\x\r\r\u\2\6\h\x\m\k\7\o\o\0\p\1\m\1\d\i\x\l\8\w\i\x\i\1\9\m\k\f\1\0\z\b\6\x\s\0\6\a\8\5\f\b\6\y\t\b\u\3\v\a\8\x\5\p\a\s\k\s\u\6\w\s\l\x\t\d\h\k\r\q\f\b\c\y\s\d\8\u\g\x\s\u\d\h\p\m\t\k\2\4\d\w\w\t\3\5\7\k\1\v\i\2\v\p\2\v\n\y\1\r\4\p\d\u\e\2\q\l\a\9\d\d\m\w\n\s\7\i\k\8\m\9\b\w\u\3\u\f\6\l\j\3\r\6\r\i\p\z\j\5\m\q\k\i\i\v\c\u\7\7\z\z\x\k\r\h\p\2\4\i\a\y\r\2\j\l\e\m\3\3\t\4\7\i\c\y\i\f\s\5\c\2\i\s\b\0\3\f\n\u\z\p\u\h\p\7\l\z\p\h\v\i\0\e\q\s\r\0\k\0\l\v\c\o\r\3\l\g\1\o\m\6\i\3\2\7\1\j\s\5\s\w\x\7\1\2\g\y\h\f\w\9\d\p\9\c\1\r\l\k\k\4\e\m\b\6\1\g\2\z\a\8\8\7\g\n\n\7\g\1\4\o\c\a\z\2\1\2\n\g\y\f\l\h\z\i\6\b\m\3\0\1\4\i\w\m\4\d\h\e\h\s\l\z\3\l\m\g\1\v\r\y\e\n\l\u\t\q\t\z\q\8\0\4\d\4\q\f\f\q\y\z\v\x\3\v\q\y\9\i\9\f\4\y\i\u\5\p\i\5\q\j\h\h\b\l\x\x\h\a\7\o\w\9\y\8\y\k\4\0\1\0\g\f\o\q\m\2\u\q\l\l\5\1\n\r\s\m\v\x\u\x\k\0\i\q\b\5\6\y\r\3\f\l\r\a\y\q\v\p\z\7\c\8\8\3\l\q\i\d\n\b\2\f\v\7\t\l\y\j\6\p\j\7\5\z\1\1\j\k\2\q\r\b\9\t\a\j\p\4\l\0\e\h\j\s\u\n\f\h\z\e\y\2\m\v\u\o\6\h\i\0\v\8\x\2\g\g\c\p\n\x\5\w\1\l\9\i\9\c\w\h\5\z\7\q\q\7\m\d\7\c\h\o\g\2\w\g\7\0\1\j\j\i\k\y\j\e\7\9\r\a\o\p\t\w\o\2\r\8\w\3\b\3\b\i\6\c\9\w\x\n\a\c\i\n\p\g\g\h\w\t\2\3\m\g\x\f\i\u\d\8\p\m\3\w\y\e\f\e\j\8\o\0\w\3\y\f\p\q\w\3\b\x\w\2\c\y\v\i\g\x\l\6\c\h\z\9\o\w\1\h\1\5\3\0\2\9\f\s\4\o\f\4\p\i\q\5\d\b\s\o\l\l\3\o\l\x\d\d\z\o\f\5\c\w\2\c\m\p\c\j\0\e\6\s\w\g\6\b\e\r\6\g\l\g\n\5\d\b\x\1\c\u\3\m\i\v\k\2\f\w\q\x\8\i\g\k\6\y\o\p\t\p\b\u\z\c\t\0\u\s\f\x\n\2\u\1\p\p\5\3\u\m\b\j\c\o\w\4\5\9\d\y\n\n\4\o\p\i\2\u\a\k\5\f\j\7\6\3\d\t\o\d\0\m\k\7\5\q\u\j\5\0\7\p\q\d\s\2\g\w\b\2\q\s\1\v\j\b\u\g\3\r\y\6\8\k\0\w\4\n\s\q\p\9\a\i\5\j\8\l\r\m\1\t\b\u\d\c\s\v\q\p\k\6\i\o\5\t\l\4\t\t\i\8\e\c\h\3\s\u\h\z\9\c\b\6\j\t\i\z\6\5\u\j\o\f\w\w\o\u\n\v\d\2\j\2\9\u\z\z\4\x\j\r\u\5\m\4\k\e\b\3\v\6\d\i\c\r\i\m\n\4\a\n\6\g\1\6\4\7\e\6\z\n\b\r\a\7\0\y\7\0\b\r\8\s\6\o\7\f\p\g\m\1\n\3\0\q\t\c\o\1\a\3\h\l\l\a\p\b\4\h\b\5\1\u\o\o\v\4\p\g\a\u\s\y\l\w\n\6\d\l\9\h\8\2\8\1\7\j\h\e\b\3\j\k\2\9\5\8\q\c\g\x\k\e\x\e\y\9\j\p\k\v\f\s\b\y\n\f\m\4\0\0\h\2\z\g\8\h\p\o\q\v\9\x\t\7\s\0\d\b\4\s\5\a\g\0\u\m\1\i\4\z\o\n\v\n\8\y\5\5\5\x\6\i\n\b\s\m\g\6\i\x\l\f\x\7\q\s\s\6\3\x\7\f\6\j\w\u\7\p\z\1\v\s\q\m\4\s\k\n\2\f\z\l\b\0\l\m\r\3\x\g\m\a\c\k\0\3\9\7\h\v\0\8\a\h\a\4\9\e\6\1\v\8\2\x\l\k\u\q\u\s\l\j\w\r\z\m\a\9\4\a\k\j\e\6\7\x\d\z\e\h\c\e\a\m\w\l\r\h\a\b\p\n\h\4\z\v\j\7\m\g\a\e\0\u\h\0\u\k\n\l\z\1\l\5\9\j\z\2\z\0\1\j\7\5\b\j\t\f\d\m\l\c\0\p\p\i\o\y\2\t\3\k\s\p\1\y\4\x\f\q\w\m\p\8\y\m\6\1\e\n\7\t\m\1\l\b\x\5\4\7\5\y\d\v\4\v\3\i\a\5\n\g\y\p\g\z\1\c\w\s\h\o\z\p\6\s\g\o\8\4\r\q\z\0\x\f\t\p\m\8\i\y\g\t\9\1\i\l\c\v\e\j\z\h\t\x\0\7\y\f\o\r\s\d\f\1\s\2\a\6\i\a\1\t\p\l\u\3\5\e\v\o\b\b\w\v\9\i\v\9\h\u\7\9\x\i\p\2\4
\p\l\7\k\4\k\m\j\m\r\c\6\r\3\t\1\e\p\l\i\w\7\w\d\k\e\b\i\g\l\r\l\9\d\d\6\r\7\o\p\r\5\3\d\b\u\9\p\r\f\u\0\9\t\c\h\5\a\n\f\w\7\5\3\g\1\v\o\4\4\9\6\p\p\v\v\2\t\v\m\z\v\e\s\s\w\s\d\y\v\q\0\e\s\v\g\r\q\5\p\i\3\w\q\a\9\f\y\i\a\w\n\t\8\q\t\y\5\8\d\l\v\5\5\c\m\x\f\t\9\r\c\q\d\m\0\b\w\s\j\n\p\2\e\a\b\7\8\i\k\s\f\z\q\g\a\j\s\0\y\9\a\9\e\k\2\o\n\g\i\s\e\6\n\d\9\8\v\1\0\c\i\5\h\z\6\2\p\i\w\o\1\s\b\g\p\b\v\x\y\1\4\7\7\r\3\a\0\7\z\v\i\m\h\e\3\g\j\2\1\k\h\1\p\l\c\e\g\4\v\f\w\h\a\0\1\c\0\2\a\k\3\c\3\j\8\s\h\m\o\0\6\y\c\l\u\v\m\y\2\h\o\2\b\e\4\y\k\0\2\u\w\k\4\9\o\a\v\k\k\h\h\f\p\f\3\y\9\g\w\q\s\z\a\d\2\h\1\d\a\b\0\5\n\6\p\d\a\3\d\p\e\6\h\0\f\w\5\k\5\5\8\m\n\d\g\0\u\d\u\0\u\r\m\n\y\j\r\3\w\h\6\9\z\o\k\x\1\l\v\i\o\o\p\a\9\f\j\a\z\9\o\e\u\i\a\j\1\x\i\4\a\b\g\r\l\6\w\y\2\i\k\6\m\f\v\z\1\6\r\i\h\9\2\b\w\f\w\h\a\u\7\y\6\b\1\3\8\v\o\u\v\p\9\w\o\y\j\9\c\y\l\3\a\6\4\e\4\1\6\r\8\7\i\c\3\8\d\8\l\z\k\b\j\b\g\u\5\4\o\m\j\h\j\w\c\z\s\q\m\7\5\k\9\x\v\p\d\b\r\s\f\j\o\t\y\d\c\w\0\y\e\y\2\s\k\p\l\9\3\8\k\1\w\v\v\t\y\a\d\c\m\b\q\9\l\x\s\g\t\g\u\r\a\6\8\q\f\q\v\1\p\2\1\m\e\x\6\p\r\1\s\s\4\w\1\5\8\1\u\x ]] 00:08:16.539 00:08:16.539 real 0m1.509s 00:08:16.539 user 0m1.059s 00:08:16.539 sys 0m0.625s 00:08:16.539 13:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.539 13:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:16.539 13:42:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:16.539 13:42:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:16.539 13:42:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:16.539 13:42:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:16.539 13:42:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:16.539 13:42:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:16.539 13:42:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:16.539 13:42:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:16.539 13:42:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:16.539 13:42:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:16.539 13:42:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:16.797 [2024-10-01 13:42:26.757048] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
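The cleanup step traced above (dd/basic_rw.sh@76, clear_nvme Nvme0n1) zeroes the first megabyte of the NVMe bdev by running spdk_dd with /dev/zero as input and the generated bdev configuration fed in over a substituted file descriptor, which is why the command line shows --json /dev/fd/62. A minimal sketch of that helper, assuming the dd/common.sh names visible in the trace (gen_conf emits the same "subsystems" JSON blob printed earlier for Nvme0); this is not the verbatim dd/common.sh source:
  spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  clear_nvme() {
    local bdev=$1            # Nvme0n1 in this run
    local bs=1048576 count=1 # zero 1 MiB at the start of the bdev
    # gen_conf prints the bdev_nvme_attach_controller / bdev_wait_for_examine config shown
    # above; process substitution makes it appear to spdk_dd as /dev/fd/NN.
    "$spdk_dd" --if=/dev/zero --bs=$bs --ob="$bdev" --count=$count --json <(gen_conf)
  }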
00:08:16.797 [2024-10-01 13:42:26.757146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60343 ] 00:08:16.797 { 00:08:16.797 "subsystems": [ 00:08:16.797 { 00:08:16.797 "subsystem": "bdev", 00:08:16.797 "config": [ 00:08:16.797 { 00:08:16.797 "params": { 00:08:16.797 "trtype": "pcie", 00:08:16.797 "traddr": "0000:00:10.0", 00:08:16.797 "name": "Nvme0" 00:08:16.797 }, 00:08:16.797 "method": "bdev_nvme_attach_controller" 00:08:16.797 }, 00:08:16.797 { 00:08:16.797 "method": "bdev_wait_for_examine" 00:08:16.797 } 00:08:16.797 ] 00:08:16.797 } 00:08:16.797 ] 00:08:16.797 } 00:08:16.797 [2024-10-01 13:42:26.891512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.055 [2024-10-01 13:42:27.012150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.055 [2024-10-01 13:42:27.068744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.313  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:17.313 00:08:17.313 13:42:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.313 ************************************ 00:08:17.313 END TEST spdk_dd_basic_rw 00:08:17.313 ************************************ 00:08:17.313 00:08:17.313 real 0m20.743s 00:08:17.313 user 0m15.246s 00:08:17.313 sys 0m7.158s 00:08:17.313 13:42:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.313 13:42:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:17.313 13:42:27 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:17.313 13:42:27 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.313 13:42:27 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.313 13:42:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:17.313 ************************************ 00:08:17.313 START TEST spdk_dd_posix 00:08:17.313 ************************************ 00:08:17.313 13:42:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:17.572 * Looking for test storage... 
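Every suite and sub-test in this log, including the spdk_dd_posix run that starts here, is launched through the run_test helper from autotest_common.sh; the rows of asterisks around START TEST / END TEST and the real/user/sys totals are its output, with the command itself timed in between. Roughly, judging only from the banners and timings visible in the trace (the real helper also juggles xtrace state, which is omitted in this sketch):
  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"     # e.g. run_test dd_flag_append append -> the append() function in posix.sh
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
  }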
00:08:17.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:17.572 13:42:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:17.572 13:42:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:08:17.572 13:42:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:17.572 13:42:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:17.572 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.572 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.572 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.572 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:17.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.573 --rc genhtml_branch_coverage=1 00:08:17.573 --rc genhtml_function_coverage=1 00:08:17.573 --rc genhtml_legend=1 00:08:17.573 --rc geninfo_all_blocks=1 00:08:17.573 --rc geninfo_unexecuted_blocks=1 00:08:17.573 00:08:17.573 ' 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:17.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.573 --rc genhtml_branch_coverage=1 00:08:17.573 --rc genhtml_function_coverage=1 00:08:17.573 --rc genhtml_legend=1 00:08:17.573 --rc geninfo_all_blocks=1 00:08:17.573 --rc geninfo_unexecuted_blocks=1 00:08:17.573 00:08:17.573 ' 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:17.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.573 --rc genhtml_branch_coverage=1 00:08:17.573 --rc genhtml_function_coverage=1 00:08:17.573 --rc genhtml_legend=1 00:08:17.573 --rc geninfo_all_blocks=1 00:08:17.573 --rc geninfo_unexecuted_blocks=1 00:08:17.573 00:08:17.573 ' 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:17.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.573 --rc genhtml_branch_coverage=1 00:08:17.573 --rc genhtml_function_coverage=1 00:08:17.573 --rc genhtml_legend=1 00:08:17.573 --rc geninfo_all_blocks=1 00:08:17.573 --rc geninfo_unexecuted_blocks=1 00:08:17.573 00:08:17.573 ' 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:17.573 * First test run, liburing in use 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:17.573 ************************************ 00:08:17.573 START TEST dd_flag_append 00:08:17.573 ************************************ 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=fosqls38rrv59zdak72jvvaye4z0zaxk 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=xpv33zzsvfczfco8axm8v3rntdmzccmh 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s fosqls38rrv59zdak72jvvaye4z0zaxk 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s xpv33zzsvfczfco8axm8v3rntdmzccmh 00:08:17.573 13:42:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:17.573 [2024-10-01 13:42:27.708587] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
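dd_flag_append, traced above, generates two 32-byte random strings, seeds dd.dump0 with the first and dd.dump1 with the second, and then copies dump0 onto dump1 with --oflag=append; the comparison that follows simply requires the destination to read back as dump1 immediately followed by dump0. Condensed, using the test_file0/test_file1 paths defined earlier in posix.sh (the $(<...) read-back is a simplification of the check shown in the trace):
  spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  dump0=$(gen_bytes 32)                    # fosqls38rrv59zdak72jvvaye4z0zaxk in this run
  dump1=$(gen_bytes 32)                    # xpv33zzsvfczfco8axm8v3rntdmzccmh
  printf '%s' "$dump0" > "$test_file0"
  printf '%s' "$dump1" > "$test_file1"
  "$spdk_dd" --if="$test_file0" --of="$test_file1" --oflag=append
  [[ $(< "$test_file1") == "$dump1$dump0" ]]   # append must keep the original contents in place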
00:08:17.573 [2024-10-01 13:42:27.708700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60415 ] 00:08:17.832 [2024-10-01 13:42:27.843428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.832 [2024-10-01 13:42:27.963740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.090 [2024-10-01 13:42:28.016731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.347  Copying: 32/32 [B] (average 31 kBps) 00:08:18.347 00:08:18.347 ************************************ 00:08:18.347 END TEST dd_flag_append 00:08:18.347 ************************************ 00:08:18.347 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ xpv33zzsvfczfco8axm8v3rntdmzccmhfosqls38rrv59zdak72jvvaye4z0zaxk == \x\p\v\3\3\z\z\s\v\f\c\z\f\c\o\8\a\x\m\8\v\3\r\n\t\d\m\z\c\c\m\h\f\o\s\q\l\s\3\8\r\r\v\5\9\z\d\a\k\7\2\j\v\v\a\y\e\4\z\0\z\a\x\k ]] 00:08:18.347 00:08:18.347 real 0m0.637s 00:08:18.347 user 0m0.376s 00:08:18.347 sys 0m0.279s 00:08:18.347 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:18.348 ************************************ 00:08:18.348 START TEST dd_flag_directory 00:08:18.348 ************************************ 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.348 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.348 [2024-10-01 13:42:28.387929] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:18.348 [2024-10-01 13:42:28.388028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60444 ] 00:08:18.606 [2024-10-01 13:42:28.526438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.606 [2024-10-01 13:42:28.660480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.606 [2024-10-01 13:42:28.717096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.606 [2024-10-01 13:42:28.756863] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:18.606 [2024-10-01 13:42:28.756961] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:18.606 [2024-10-01 13:42:28.756982] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.864 [2024-10-01 13:42:28.876956] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:18.864 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:08:18.864 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.864 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:08:18.864 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:08:18.864 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:08:18.864 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.865 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:18.865 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:08:18.865 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:18.865 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.865 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.865 13:42:28 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.865 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.865 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.865 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.865 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.865 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.865 13:42:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:18.865 [2024-10-01 13:42:29.031669] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:18.865 [2024-10-01 13:42:29.031763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60459 ] 00:08:19.121 [2024-10-01 13:42:29.165557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.121 [2024-10-01 13:42:29.288881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.379 [2024-10-01 13:42:29.342879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.379 [2024-10-01 13:42:29.379105] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:19.379 [2024-10-01 13:42:29.379162] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:19.379 [2024-10-01 13:42:29.379177] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.379 [2024-10-01 13:42:29.493441] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:19.638 ************************************ 00:08:19.638 END TEST dd_flag_directory 00:08:19.638 ************************************ 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.638 00:08:19.638 real 0m1.260s 00:08:19.638 user 0m0.750s 00:08:19.638 sys 0m0.296s 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:19.638 13:42:29 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:19.638 ************************************ 00:08:19.638 START TEST dd_flag_nofollow 00:08:19.638 ************************************ 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.638 13:42:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.638 [2024-10-01 13:42:29.714030] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
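dd_flag_nofollow, starting above, points symlinks at the two dump files and then asserts that spdk_dd refuses to open them whenever nofollow is requested, while a plain copy through the same link still succeeds; the failing invocations are wrapped in the NOT helper, so the "Too many levels of symbolic links" errors reported below are the expected outcome, not a test failure. A trimmed sketch with the link names from the trace:
  spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  ln -fs "$test_file0" "$test_file0_link"
  ln -fs "$test_file1" "$test_file1_link"
  # Opening either end through a link with nofollow set must fail (ELOOP)...
  NOT "$spdk_dd" --if="$test_file0_link" --iflag=nofollow --of="$test_file1"
  NOT "$spdk_dd" --if="$test_file0" --of="$test_file1_link" --oflag=nofollow
  # ...whereas following the link without the flag still copies the 512-byte payload.
  "$spdk_dd" --if="$test_file0_link" --of="$test_file1"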
00:08:19.638 [2024-10-01 13:42:29.714141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60482 ] 00:08:19.901 [2024-10-01 13:42:29.854067] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.901 [2024-10-01 13:42:30.000775] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.181 [2024-10-01 13:42:30.077037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.181 [2024-10-01 13:42:30.125694] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:20.181 [2024-10-01 13:42:30.125772] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:20.181 [2024-10-01 13:42:30.125790] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:20.181 [2024-10-01 13:42:30.296167] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.440 13:42:30 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:20.440 13:42:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:20.440 [2024-10-01 13:42:30.494094] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:20.440 [2024-10-01 13:42:30.494434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60497 ] 00:08:20.699 [2024-10-01 13:42:30.632582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.699 [2024-10-01 13:42:30.779629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.699 [2024-10-01 13:42:30.855287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.957 [2024-10-01 13:42:30.902653] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:20.957 [2024-10-01 13:42:30.902991] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:20.957 [2024-10-01 13:42:30.903015] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:20.957 [2024-10-01 13:42:31.073855] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:21.215 13:42:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:08:21.215 13:42:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:21.215 13:42:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:08:21.215 13:42:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:08:21.215 13:42:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:08:21.215 13:42:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:21.215 13:42:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:21.215 13:42:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:21.215 13:42:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:21.215 13:42:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.215 [2024-10-01 13:42:31.271683] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
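The es= bookkeeping that follows each of those expected failures (es=216 becoming es=88 and finally es=1) is how the NOT helper in autotest_common.sh normalises spdk_dd's exit status: anything above 128 is folded back into the 0-127 range, any remaining non-zero value collapses to 1, and NOT only succeeds when that final value is non-zero. A rough sketch of that logic as it appears in the xtrace (the real function also validates its argument with valid_exec_arg, skipped here):
  NOT() {
    local es=0
    "$@" || es=$?                        # run the wrapped spdk_dd command
    (( es > 128 )) && es=$(( es - 128 )) # e.g. 216 -> 88, 236 -> 108
    (( es != 0 )) && es=1                # collapse every failure to 1
    (( !es == 0 ))                       # succeed only if the command failed
  }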
00:08:21.216 [2024-10-01 13:42:31.272014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60510 ] 00:08:21.474 [2024-10-01 13:42:31.410678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.474 [2024-10-01 13:42:31.557694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.474 [2024-10-01 13:42:31.633965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.991  Copying: 512/512 [B] (average 500 kBps) 00:08:21.991 00:08:21.991 13:42:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ s5jbylrjwupigwgo5yln0uj07v6d21dowjqmhh9ecw2aye7pggv7gvv65mw7fkydrufmxzn2k9wi0sf3u7e279yruspv0frtngzm0zs4rdsc8whqgjoh8c0bbv7nvpnb6wxtspdhlcmj92nu7q66aic07a9lk6fjs2lhz8agbkrjbemiz4ygez7kzabg7fun7ib350efzwswxotm7inprnzhbrb6aa1k5jjcl7x5p1w03rqzzbd5lw7nkd2vh7prsiyjhevgsazwnsti04p4qwkxdg1fpu6hud668tr742jbmb41uizvdxzasw5abubcqift1wzqyu8vw79nkvb4gncbonk74ibhwmx40emd5c4bhsnwygzkgo7dmbxbn1cgwsba7sog7oi1x7u5zu18mq14bc90fe38kgd40xhz05qyq4ynxnpa9qqynqlpn3rfjk99029273wehfqma19yqphr41vd23fmyvsbwut6pcz415jau1f1bey2urmwv83f == \s\5\j\b\y\l\r\j\w\u\p\i\g\w\g\o\5\y\l\n\0\u\j\0\7\v\6\d\2\1\d\o\w\j\q\m\h\h\9\e\c\w\2\a\y\e\7\p\g\g\v\7\g\v\v\6\5\m\w\7\f\k\y\d\r\u\f\m\x\z\n\2\k\9\w\i\0\s\f\3\u\7\e\2\7\9\y\r\u\s\p\v\0\f\r\t\n\g\z\m\0\z\s\4\r\d\s\c\8\w\h\q\g\j\o\h\8\c\0\b\b\v\7\n\v\p\n\b\6\w\x\t\s\p\d\h\l\c\m\j\9\2\n\u\7\q\6\6\a\i\c\0\7\a\9\l\k\6\f\j\s\2\l\h\z\8\a\g\b\k\r\j\b\e\m\i\z\4\y\g\e\z\7\k\z\a\b\g\7\f\u\n\7\i\b\3\5\0\e\f\z\w\s\w\x\o\t\m\7\i\n\p\r\n\z\h\b\r\b\6\a\a\1\k\5\j\j\c\l\7\x\5\p\1\w\0\3\r\q\z\z\b\d\5\l\w\7\n\k\d\2\v\h\7\p\r\s\i\y\j\h\e\v\g\s\a\z\w\n\s\t\i\0\4\p\4\q\w\k\x\d\g\1\f\p\u\6\h\u\d\6\6\8\t\r\7\4\2\j\b\m\b\4\1\u\i\z\v\d\x\z\a\s\w\5\a\b\u\b\c\q\i\f\t\1\w\z\q\y\u\8\v\w\7\9\n\k\v\b\4\g\n\c\b\o\n\k\7\4\i\b\h\w\m\x\4\0\e\m\d\5\c\4\b\h\s\n\w\y\g\z\k\g\o\7\d\m\b\x\b\n\1\c\g\w\s\b\a\7\s\o\g\7\o\i\1\x\7\u\5\z\u\1\8\m\q\1\4\b\c\9\0\f\e\3\8\k\g\d\4\0\x\h\z\0\5\q\y\q\4\y\n\x\n\p\a\9\q\q\y\n\q\l\p\n\3\r\f\j\k\9\9\0\2\9\2\7\3\w\e\h\f\q\m\a\1\9\y\q\p\h\r\4\1\v\d\2\3\f\m\y\v\s\b\w\u\t\6\p\c\z\4\1\5\j\a\u\1\f\1\b\e\y\2\u\r\m\w\v\8\3\f ]] 00:08:21.991 00:08:21.991 real 0m2.352s 00:08:21.991 user 0m1.434s 00:08:21.991 sys 0m0.776s 00:08:21.992 13:42:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.992 13:42:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:21.992 ************************************ 00:08:21.992 END TEST dd_flag_nofollow 00:08:21.992 ************************************ 00:08:21.992 13:42:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:21.992 13:42:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.992 13:42:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.992 13:42:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:21.992 ************************************ 00:08:21.992 START TEST dd_flag_noatime 00:08:21.992 ************************************ 00:08:21.992 13:42:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:08:21.992 13:42:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:08:21.992 13:42:32 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:21.992 13:42:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:21.992 13:42:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:21.992 13:42:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:21.992 13:42:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:21.992 13:42:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1727790151 00:08:21.992 13:42:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.992 13:42:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1727790151 00:08:21.992 13:42:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:22.957 13:42:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.215 [2024-10-01 13:42:33.165185] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:23.215 [2024-10-01 13:42:33.165743] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60558 ] 00:08:23.215 [2024-10-01 13:42:33.312132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.473 [2024-10-01 13:42:33.446875] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.473 [2024-10-01 13:42:33.501691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.731  Copying: 512/512 [B] (average 500 kBps) 00:08:23.731 00:08:23.731 13:42:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:23.731 13:42:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1727790151 )) 00:08:23.731 13:42:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.731 13:42:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1727790151 )) 00:08:23.731 13:42:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.731 [2024-10-01 13:42:33.841321] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
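dd_flag_noatime is a plain timestamp comparison: stat --printf=%X records each dump file's access time, the test sleeps one second so that any read would be observable, and after the --iflag=noatime copy both atimes must still equal the recorded values, which the (( atime_if == ... )) checks above confirm; a second copy without the flag is then expected to move dump0's atime forward, which the final (( atime_if < ... )) assertion below verifies. Condensed, with the file names from the trace:
  spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  atime_if=$(stat --printf=%X "$test_file0")   # 1727790151 in this run
  atime_of=$(stat --printf=%X "$test_file1")
  sleep 1
  "$spdk_dd" --if="$test_file0" --iflag=noatime --of="$test_file1"
  (( $(stat --printf=%X "$test_file0") == atime_if ))   # noatime read left the source atime alone
  (( $(stat --printf=%X "$test_file1") == atime_of ))
  "$spdk_dd" --if="$test_file0" --of="$test_file1"      # the same copy without noatime...
  (( atime_if < $(stat --printf=%X "$test_file0") ))    # ...is expected to bump the source atime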
00:08:23.731 [2024-10-01 13:42:33.841439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60566 ] 00:08:23.990 [2024-10-01 13:42:33.977010] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.990 [2024-10-01 13:42:34.119482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.248 [2024-10-01 13:42:34.179380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.505  Copying: 512/512 [B] (average 500 kBps) 00:08:24.505 00:08:24.505 13:42:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:24.505 ************************************ 00:08:24.505 END TEST dd_flag_noatime 00:08:24.505 ************************************ 00:08:24.505 13:42:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1727790154 )) 00:08:24.505 00:08:24.505 real 0m2.561s 00:08:24.505 user 0m0.970s 00:08:24.505 sys 0m0.717s 00:08:24.505 13:42:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.505 13:42:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:24.505 13:42:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:24.505 13:42:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.505 13:42:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.505 13:42:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:24.505 ************************************ 00:08:24.505 START TEST dd_flags_misc 00:08:24.505 ************************************ 00:08:24.505 13:42:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:08:24.506 13:42:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:24.506 13:42:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:24.506 13:42:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:24.506 13:42:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:24.506 13:42:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:24.506 13:42:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:24.506 13:42:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:24.506 13:42:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.506 13:42:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:24.763 [2024-10-01 13:42:34.729338] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
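dd_flags_misc, which starts above, is a small flag matrix: every read flag in flags_ro=(direct nonblock) is combined with every write flag in flags_rw=(direct nonblock sync dsync), and for each pair a fresh 512-byte dump0 is copied to dump1 and compared back, so the rest of this section is a run of near-identical spdk_dd invocations differing only in --iflag/--oflag. In outline, following the arrays printed in the trace (the gen_bytes redirection and the read-back comparison are condensed here):
  spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)      # direct nonblock sync dsync
  for flag_ro in "${flags_ro[@]}"; do
    gen_bytes 512 > "$test_file0"             # new random payload for each read flag
    for flag_rw in "${flags_rw[@]}"; do
      "$spdk_dd" --if="$test_file0" --iflag="$flag_ro" \
                 --of="$test_file1" --oflag="$flag_rw"
      [[ $(< "$test_file1") == "$(< "$test_file0")" ]]  # payload must survive every combination
    done
  done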
00:08:24.763 [2024-10-01 13:42:34.729473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60600 ] 00:08:24.763 [2024-10-01 13:42:34.868623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.021 [2024-10-01 13:42:35.066408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.021 [2024-10-01 13:42:35.157287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.536  Copying: 512/512 [B] (average 500 kBps) 00:08:25.536 00:08:25.536 13:42:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ nk6ddoxqsbi2z67updqtbsylo7b7bwzdajycye7tdlfvr2nbx4fu416f5jc06irtju250rkzisqb6sve8r4vtzdstv3sns5fuilhsd4b8bi4tf2ewtkbmn9skk7q3xawluy3w910yldssqarxuvgu2a4s6ywkr9wnsjazuoampmys3zce5h35bo0psq025n8yk079xmovs3hjkezmuz4mz93wo1h0tjfgbyatts1v0hqxdmkjy5o16nebcyimx9n7rj58nfifpkv7ynoo60cbqlw5un3pjt2e1i22ef5s8g7glczmp2cbfqoeeu6164evlufap9jpwej89whbtgfeccunhpibo5zrgx3wbjn537856b0jo1xvrrm4xs0ogndxu4gmamwn2zx33jartvnmg3grgtsf6trk03a0o85um3otxozpz4qv2spz2sw01gicazfzxphq3lk0s1cnhpxxfjc0rjhhllqouwlomaaclt6nmpcymyrcz1goj9xc6p4 == \n\k\6\d\d\o\x\q\s\b\i\2\z\6\7\u\p\d\q\t\b\s\y\l\o\7\b\7\b\w\z\d\a\j\y\c\y\e\7\t\d\l\f\v\r\2\n\b\x\4\f\u\4\1\6\f\5\j\c\0\6\i\r\t\j\u\2\5\0\r\k\z\i\s\q\b\6\s\v\e\8\r\4\v\t\z\d\s\t\v\3\s\n\s\5\f\u\i\l\h\s\d\4\b\8\b\i\4\t\f\2\e\w\t\k\b\m\n\9\s\k\k\7\q\3\x\a\w\l\u\y\3\w\9\1\0\y\l\d\s\s\q\a\r\x\u\v\g\u\2\a\4\s\6\y\w\k\r\9\w\n\s\j\a\z\u\o\a\m\p\m\y\s\3\z\c\e\5\h\3\5\b\o\0\p\s\q\0\2\5\n\8\y\k\0\7\9\x\m\o\v\s\3\h\j\k\e\z\m\u\z\4\m\z\9\3\w\o\1\h\0\t\j\f\g\b\y\a\t\t\s\1\v\0\h\q\x\d\m\k\j\y\5\o\1\6\n\e\b\c\y\i\m\x\9\n\7\r\j\5\8\n\f\i\f\p\k\v\7\y\n\o\o\6\0\c\b\q\l\w\5\u\n\3\p\j\t\2\e\1\i\2\2\e\f\5\s\8\g\7\g\l\c\z\m\p\2\c\b\f\q\o\e\e\u\6\1\6\4\e\v\l\u\f\a\p\9\j\p\w\e\j\8\9\w\h\b\t\g\f\e\c\c\u\n\h\p\i\b\o\5\z\r\g\x\3\w\b\j\n\5\3\7\8\5\6\b\0\j\o\1\x\v\r\r\m\4\x\s\0\o\g\n\d\x\u\4\g\m\a\m\w\n\2\z\x\3\3\j\a\r\t\v\n\m\g\3\g\r\g\t\s\f\6\t\r\k\0\3\a\0\o\8\5\u\m\3\o\t\x\o\z\p\z\4\q\v\2\s\p\z\2\s\w\0\1\g\i\c\a\z\f\z\x\p\h\q\3\l\k\0\s\1\c\n\h\p\x\x\f\j\c\0\r\j\h\h\l\l\q\o\u\w\l\o\m\a\a\c\l\t\6\n\m\p\c\y\m\y\r\c\z\1\g\o\j\9\x\c\6\p\4 ]] 00:08:25.536 13:42:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.536 13:42:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:25.536 [2024-10-01 13:42:35.666411] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:25.536 [2024-10-01 13:42:35.666544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60615 ] 00:08:25.793 [2024-10-01 13:42:35.801799] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.793 [2024-10-01 13:42:35.938663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.049 [2024-10-01 13:42:35.997162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.306  Copying: 512/512 [B] (average 500 kBps) 00:08:26.306 00:08:26.307 13:42:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ nk6ddoxqsbi2z67updqtbsylo7b7bwzdajycye7tdlfvr2nbx4fu416f5jc06irtju250rkzisqb6sve8r4vtzdstv3sns5fuilhsd4b8bi4tf2ewtkbmn9skk7q3xawluy3w910yldssqarxuvgu2a4s6ywkr9wnsjazuoampmys3zce5h35bo0psq025n8yk079xmovs3hjkezmuz4mz93wo1h0tjfgbyatts1v0hqxdmkjy5o16nebcyimx9n7rj58nfifpkv7ynoo60cbqlw5un3pjt2e1i22ef5s8g7glczmp2cbfqoeeu6164evlufap9jpwej89whbtgfeccunhpibo5zrgx3wbjn537856b0jo1xvrrm4xs0ogndxu4gmamwn2zx33jartvnmg3grgtsf6trk03a0o85um3otxozpz4qv2spz2sw01gicazfzxphq3lk0s1cnhpxxfjc0rjhhllqouwlomaaclt6nmpcymyrcz1goj9xc6p4 == \n\k\6\d\d\o\x\q\s\b\i\2\z\6\7\u\p\d\q\t\b\s\y\l\o\7\b\7\b\w\z\d\a\j\y\c\y\e\7\t\d\l\f\v\r\2\n\b\x\4\f\u\4\1\6\f\5\j\c\0\6\i\r\t\j\u\2\5\0\r\k\z\i\s\q\b\6\s\v\e\8\r\4\v\t\z\d\s\t\v\3\s\n\s\5\f\u\i\l\h\s\d\4\b\8\b\i\4\t\f\2\e\w\t\k\b\m\n\9\s\k\k\7\q\3\x\a\w\l\u\y\3\w\9\1\0\y\l\d\s\s\q\a\r\x\u\v\g\u\2\a\4\s\6\y\w\k\r\9\w\n\s\j\a\z\u\o\a\m\p\m\y\s\3\z\c\e\5\h\3\5\b\o\0\p\s\q\0\2\5\n\8\y\k\0\7\9\x\m\o\v\s\3\h\j\k\e\z\m\u\z\4\m\z\9\3\w\o\1\h\0\t\j\f\g\b\y\a\t\t\s\1\v\0\h\q\x\d\m\k\j\y\5\o\1\6\n\e\b\c\y\i\m\x\9\n\7\r\j\5\8\n\f\i\f\p\k\v\7\y\n\o\o\6\0\c\b\q\l\w\5\u\n\3\p\j\t\2\e\1\i\2\2\e\f\5\s\8\g\7\g\l\c\z\m\p\2\c\b\f\q\o\e\e\u\6\1\6\4\e\v\l\u\f\a\p\9\j\p\w\e\j\8\9\w\h\b\t\g\f\e\c\c\u\n\h\p\i\b\o\5\z\r\g\x\3\w\b\j\n\5\3\7\8\5\6\b\0\j\o\1\x\v\r\r\m\4\x\s\0\o\g\n\d\x\u\4\g\m\a\m\w\n\2\z\x\3\3\j\a\r\t\v\n\m\g\3\g\r\g\t\s\f\6\t\r\k\0\3\a\0\o\8\5\u\m\3\o\t\x\o\z\p\z\4\q\v\2\s\p\z\2\s\w\0\1\g\i\c\a\z\f\z\x\p\h\q\3\l\k\0\s\1\c\n\h\p\x\x\f\j\c\0\r\j\h\h\l\l\q\o\u\w\l\o\m\a\a\c\l\t\6\n\m\p\c\y\m\y\r\c\z\1\g\o\j\9\x\c\6\p\4 ]] 00:08:26.307 13:42:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:26.307 13:42:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:26.307 [2024-10-01 13:42:36.316363] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:26.307 [2024-10-01 13:42:36.316708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60630 ] 00:08:26.307 [2024-10-01 13:42:36.449187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.564 [2024-10-01 13:42:36.570810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.564 [2024-10-01 13:42:36.626370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.822  Copying: 512/512 [B] (average 250 kBps) 00:08:26.822 00:08:26.822 13:42:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ nk6ddoxqsbi2z67updqtbsylo7b7bwzdajycye7tdlfvr2nbx4fu416f5jc06irtju250rkzisqb6sve8r4vtzdstv3sns5fuilhsd4b8bi4tf2ewtkbmn9skk7q3xawluy3w910yldssqarxuvgu2a4s6ywkr9wnsjazuoampmys3zce5h35bo0psq025n8yk079xmovs3hjkezmuz4mz93wo1h0tjfgbyatts1v0hqxdmkjy5o16nebcyimx9n7rj58nfifpkv7ynoo60cbqlw5un3pjt2e1i22ef5s8g7glczmp2cbfqoeeu6164evlufap9jpwej89whbtgfeccunhpibo5zrgx3wbjn537856b0jo1xvrrm4xs0ogndxu4gmamwn2zx33jartvnmg3grgtsf6trk03a0o85um3otxozpz4qv2spz2sw01gicazfzxphq3lk0s1cnhpxxfjc0rjhhllqouwlomaaclt6nmpcymyrcz1goj9xc6p4 == \n\k\6\d\d\o\x\q\s\b\i\2\z\6\7\u\p\d\q\t\b\s\y\l\o\7\b\7\b\w\z\d\a\j\y\c\y\e\7\t\d\l\f\v\r\2\n\b\x\4\f\u\4\1\6\f\5\j\c\0\6\i\r\t\j\u\2\5\0\r\k\z\i\s\q\b\6\s\v\e\8\r\4\v\t\z\d\s\t\v\3\s\n\s\5\f\u\i\l\h\s\d\4\b\8\b\i\4\t\f\2\e\w\t\k\b\m\n\9\s\k\k\7\q\3\x\a\w\l\u\y\3\w\9\1\0\y\l\d\s\s\q\a\r\x\u\v\g\u\2\a\4\s\6\y\w\k\r\9\w\n\s\j\a\z\u\o\a\m\p\m\y\s\3\z\c\e\5\h\3\5\b\o\0\p\s\q\0\2\5\n\8\y\k\0\7\9\x\m\o\v\s\3\h\j\k\e\z\m\u\z\4\m\z\9\3\w\o\1\h\0\t\j\f\g\b\y\a\t\t\s\1\v\0\h\q\x\d\m\k\j\y\5\o\1\6\n\e\b\c\y\i\m\x\9\n\7\r\j\5\8\n\f\i\f\p\k\v\7\y\n\o\o\6\0\c\b\q\l\w\5\u\n\3\p\j\t\2\e\1\i\2\2\e\f\5\s\8\g\7\g\l\c\z\m\p\2\c\b\f\q\o\e\e\u\6\1\6\4\e\v\l\u\f\a\p\9\j\p\w\e\j\8\9\w\h\b\t\g\f\e\c\c\u\n\h\p\i\b\o\5\z\r\g\x\3\w\b\j\n\5\3\7\8\5\6\b\0\j\o\1\x\v\r\r\m\4\x\s\0\o\g\n\d\x\u\4\g\m\a\m\w\n\2\z\x\3\3\j\a\r\t\v\n\m\g\3\g\r\g\t\s\f\6\t\r\k\0\3\a\0\o\8\5\u\m\3\o\t\x\o\z\p\z\4\q\v\2\s\p\z\2\s\w\0\1\g\i\c\a\z\f\z\x\p\h\q\3\l\k\0\s\1\c\n\h\p\x\x\f\j\c\0\r\j\h\h\l\l\q\o\u\w\l\o\m\a\a\c\l\t\6\n\m\p\c\y\m\y\r\c\z\1\g\o\j\9\x\c\6\p\4 ]] 00:08:26.822 13:42:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:26.822 13:42:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:26.822 [2024-10-01 13:42:36.959469] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:26.822 [2024-10-01 13:42:36.959589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60634 ] 00:08:27.079 [2024-10-01 13:42:37.098994] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.079 [2024-10-01 13:42:37.220901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.336 [2024-10-01 13:42:37.274172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.594  Copying: 512/512 [B] (average 250 kBps) 00:08:27.594 00:08:27.594 13:42:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ nk6ddoxqsbi2z67updqtbsylo7b7bwzdajycye7tdlfvr2nbx4fu416f5jc06irtju250rkzisqb6sve8r4vtzdstv3sns5fuilhsd4b8bi4tf2ewtkbmn9skk7q3xawluy3w910yldssqarxuvgu2a4s6ywkr9wnsjazuoampmys3zce5h35bo0psq025n8yk079xmovs3hjkezmuz4mz93wo1h0tjfgbyatts1v0hqxdmkjy5o16nebcyimx9n7rj58nfifpkv7ynoo60cbqlw5un3pjt2e1i22ef5s8g7glczmp2cbfqoeeu6164evlufap9jpwej89whbtgfeccunhpibo5zrgx3wbjn537856b0jo1xvrrm4xs0ogndxu4gmamwn2zx33jartvnmg3grgtsf6trk03a0o85um3otxozpz4qv2spz2sw01gicazfzxphq3lk0s1cnhpxxfjc0rjhhllqouwlomaaclt6nmpcymyrcz1goj9xc6p4 == \n\k\6\d\d\o\x\q\s\b\i\2\z\6\7\u\p\d\q\t\b\s\y\l\o\7\b\7\b\w\z\d\a\j\y\c\y\e\7\t\d\l\f\v\r\2\n\b\x\4\f\u\4\1\6\f\5\j\c\0\6\i\r\t\j\u\2\5\0\r\k\z\i\s\q\b\6\s\v\e\8\r\4\v\t\z\d\s\t\v\3\s\n\s\5\f\u\i\l\h\s\d\4\b\8\b\i\4\t\f\2\e\w\t\k\b\m\n\9\s\k\k\7\q\3\x\a\w\l\u\y\3\w\9\1\0\y\l\d\s\s\q\a\r\x\u\v\g\u\2\a\4\s\6\y\w\k\r\9\w\n\s\j\a\z\u\o\a\m\p\m\y\s\3\z\c\e\5\h\3\5\b\o\0\p\s\q\0\2\5\n\8\y\k\0\7\9\x\m\o\v\s\3\h\j\k\e\z\m\u\z\4\m\z\9\3\w\o\1\h\0\t\j\f\g\b\y\a\t\t\s\1\v\0\h\q\x\d\m\k\j\y\5\o\1\6\n\e\b\c\y\i\m\x\9\n\7\r\j\5\8\n\f\i\f\p\k\v\7\y\n\o\o\6\0\c\b\q\l\w\5\u\n\3\p\j\t\2\e\1\i\2\2\e\f\5\s\8\g\7\g\l\c\z\m\p\2\c\b\f\q\o\e\e\u\6\1\6\4\e\v\l\u\f\a\p\9\j\p\w\e\j\8\9\w\h\b\t\g\f\e\c\c\u\n\h\p\i\b\o\5\z\r\g\x\3\w\b\j\n\5\3\7\8\5\6\b\0\j\o\1\x\v\r\r\m\4\x\s\0\o\g\n\d\x\u\4\g\m\a\m\w\n\2\z\x\3\3\j\a\r\t\v\n\m\g\3\g\r\g\t\s\f\6\t\r\k\0\3\a\0\o\8\5\u\m\3\o\t\x\o\z\p\z\4\q\v\2\s\p\z\2\s\w\0\1\g\i\c\a\z\f\z\x\p\h\q\3\l\k\0\s\1\c\n\h\p\x\x\f\j\c\0\r\j\h\h\l\l\q\o\u\w\l\o\m\a\a\c\l\t\6\n\m\p\c\y\m\y\r\c\z\1\g\o\j\9\x\c\6\p\4 ]] 00:08:27.594 13:42:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:27.594 13:42:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:27.594 13:42:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:27.594 13:42:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:27.594 13:42:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:27.594 13:42:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:27.594 [2024-10-01 13:42:37.595810] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:27.594 [2024-10-01 13:42:37.595949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60649 ] 00:08:27.594 [2024-10-01 13:42:37.735395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.852 [2024-10-01 13:42:37.856191] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.852 [2024-10-01 13:42:37.909536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.110  Copying: 512/512 [B] (average 500 kBps) 00:08:28.110 00:08:28.110 13:42:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qfxgkpcr36jz5xechvnuuq2dh6u6pl9a5skwck2713d1c92takwy4t7r7dep3ilfu40udnt2n62e8nfm2k79l8ude9w77km45sjeq46gqb6p6t0g71b02fixm3yhwizs89tra3uervgb44hmhfj7y1kim09cqsrx0hdc1uph7j7psz4kf9gnf8dsbxyqz8kxegrs4c64f0sryy9p2z0puzvkmzfmls97gzu7eyocpj00d9jt35y0b7200vk2roasjmxs4oo4g5y7cvlf2oxokmodq6svixtw4mc671zqfky8zg7il75a9y23of49zuufyn1olqy5jyu7s2p01r1l444d4r8kiuvq63g0iqif8xlgg8558x9ptnkncga65tu2fxa8nuq7as8gghwdjioactlzcfo1mk2d0sot81t5vb2ww43sq415ybmhykde04ivgtkvej3frzbnj4aly0gepcej9of29ex8fldqv4ihdinmmbsg7yzotpvno43t0b5i == \q\f\x\g\k\p\c\r\3\6\j\z\5\x\e\c\h\v\n\u\u\q\2\d\h\6\u\6\p\l\9\a\5\s\k\w\c\k\2\7\1\3\d\1\c\9\2\t\a\k\w\y\4\t\7\r\7\d\e\p\3\i\l\f\u\4\0\u\d\n\t\2\n\6\2\e\8\n\f\m\2\k\7\9\l\8\u\d\e\9\w\7\7\k\m\4\5\s\j\e\q\4\6\g\q\b\6\p\6\t\0\g\7\1\b\0\2\f\i\x\m\3\y\h\w\i\z\s\8\9\t\r\a\3\u\e\r\v\g\b\4\4\h\m\h\f\j\7\y\1\k\i\m\0\9\c\q\s\r\x\0\h\d\c\1\u\p\h\7\j\7\p\s\z\4\k\f\9\g\n\f\8\d\s\b\x\y\q\z\8\k\x\e\g\r\s\4\c\6\4\f\0\s\r\y\y\9\p\2\z\0\p\u\z\v\k\m\z\f\m\l\s\9\7\g\z\u\7\e\y\o\c\p\j\0\0\d\9\j\t\3\5\y\0\b\7\2\0\0\v\k\2\r\o\a\s\j\m\x\s\4\o\o\4\g\5\y\7\c\v\l\f\2\o\x\o\k\m\o\d\q\6\s\v\i\x\t\w\4\m\c\6\7\1\z\q\f\k\y\8\z\g\7\i\l\7\5\a\9\y\2\3\o\f\4\9\z\u\u\f\y\n\1\o\l\q\y\5\j\y\u\7\s\2\p\0\1\r\1\l\4\4\4\d\4\r\8\k\i\u\v\q\6\3\g\0\i\q\i\f\8\x\l\g\g\8\5\5\8\x\9\p\t\n\k\n\c\g\a\6\5\t\u\2\f\x\a\8\n\u\q\7\a\s\8\g\g\h\w\d\j\i\o\a\c\t\l\z\c\f\o\1\m\k\2\d\0\s\o\t\8\1\t\5\v\b\2\w\w\4\3\s\q\4\1\5\y\b\m\h\y\k\d\e\0\4\i\v\g\t\k\v\e\j\3\f\r\z\b\n\j\4\a\l\y\0\g\e\p\c\e\j\9\o\f\2\9\e\x\8\f\l\d\q\v\4\i\h\d\i\n\m\m\b\s\g\7\y\z\o\t\p\v\n\o\4\3\t\0\b\5\i ]] 00:08:28.110 13:42:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:28.110 13:42:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:28.110 [2024-10-01 13:42:38.232074] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:28.110 [2024-10-01 13:42:38.232176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60664 ] 00:08:28.367 [2024-10-01 13:42:38.375454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.367 [2024-10-01 13:42:38.497799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.625 [2024-10-01 13:42:38.552557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.883  Copying: 512/512 [B] (average 500 kBps) 00:08:28.883 00:08:28.883 13:42:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qfxgkpcr36jz5xechvnuuq2dh6u6pl9a5skwck2713d1c92takwy4t7r7dep3ilfu40udnt2n62e8nfm2k79l8ude9w77km45sjeq46gqb6p6t0g71b02fixm3yhwizs89tra3uervgb44hmhfj7y1kim09cqsrx0hdc1uph7j7psz4kf9gnf8dsbxyqz8kxegrs4c64f0sryy9p2z0puzvkmzfmls97gzu7eyocpj00d9jt35y0b7200vk2roasjmxs4oo4g5y7cvlf2oxokmodq6svixtw4mc671zqfky8zg7il75a9y23of49zuufyn1olqy5jyu7s2p01r1l444d4r8kiuvq63g0iqif8xlgg8558x9ptnkncga65tu2fxa8nuq7as8gghwdjioactlzcfo1mk2d0sot81t5vb2ww43sq415ybmhykde04ivgtkvej3frzbnj4aly0gepcej9of29ex8fldqv4ihdinmmbsg7yzotpvno43t0b5i == \q\f\x\g\k\p\c\r\3\6\j\z\5\x\e\c\h\v\n\u\u\q\2\d\h\6\u\6\p\l\9\a\5\s\k\w\c\k\2\7\1\3\d\1\c\9\2\t\a\k\w\y\4\t\7\r\7\d\e\p\3\i\l\f\u\4\0\u\d\n\t\2\n\6\2\e\8\n\f\m\2\k\7\9\l\8\u\d\e\9\w\7\7\k\m\4\5\s\j\e\q\4\6\g\q\b\6\p\6\t\0\g\7\1\b\0\2\f\i\x\m\3\y\h\w\i\z\s\8\9\t\r\a\3\u\e\r\v\g\b\4\4\h\m\h\f\j\7\y\1\k\i\m\0\9\c\q\s\r\x\0\h\d\c\1\u\p\h\7\j\7\p\s\z\4\k\f\9\g\n\f\8\d\s\b\x\y\q\z\8\k\x\e\g\r\s\4\c\6\4\f\0\s\r\y\y\9\p\2\z\0\p\u\z\v\k\m\z\f\m\l\s\9\7\g\z\u\7\e\y\o\c\p\j\0\0\d\9\j\t\3\5\y\0\b\7\2\0\0\v\k\2\r\o\a\s\j\m\x\s\4\o\o\4\g\5\y\7\c\v\l\f\2\o\x\o\k\m\o\d\q\6\s\v\i\x\t\w\4\m\c\6\7\1\z\q\f\k\y\8\z\g\7\i\l\7\5\a\9\y\2\3\o\f\4\9\z\u\u\f\y\n\1\o\l\q\y\5\j\y\u\7\s\2\p\0\1\r\1\l\4\4\4\d\4\r\8\k\i\u\v\q\6\3\g\0\i\q\i\f\8\x\l\g\g\8\5\5\8\x\9\p\t\n\k\n\c\g\a\6\5\t\u\2\f\x\a\8\n\u\q\7\a\s\8\g\g\h\w\d\j\i\o\a\c\t\l\z\c\f\o\1\m\k\2\d\0\s\o\t\8\1\t\5\v\b\2\w\w\4\3\s\q\4\1\5\y\b\m\h\y\k\d\e\0\4\i\v\g\t\k\v\e\j\3\f\r\z\b\n\j\4\a\l\y\0\g\e\p\c\e\j\9\o\f\2\9\e\x\8\f\l\d\q\v\4\i\h\d\i\n\m\m\b\s\g\7\y\z\o\t\p\v\n\o\4\3\t\0\b\5\i ]] 00:08:28.883 13:42:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:28.883 13:42:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:28.883 [2024-10-01 13:42:38.875069] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:28.883 [2024-10-01 13:42:38.875187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60668 ] 00:08:28.883 [2024-10-01 13:42:39.012624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.140 [2024-10-01 13:42:39.134499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.140 [2024-10-01 13:42:39.189508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.460  Copying: 512/512 [B] (average 250 kBps) 00:08:29.460 00:08:29.461 13:42:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qfxgkpcr36jz5xechvnuuq2dh6u6pl9a5skwck2713d1c92takwy4t7r7dep3ilfu40udnt2n62e8nfm2k79l8ude9w77km45sjeq46gqb6p6t0g71b02fixm3yhwizs89tra3uervgb44hmhfj7y1kim09cqsrx0hdc1uph7j7psz4kf9gnf8dsbxyqz8kxegrs4c64f0sryy9p2z0puzvkmzfmls97gzu7eyocpj00d9jt35y0b7200vk2roasjmxs4oo4g5y7cvlf2oxokmodq6svixtw4mc671zqfky8zg7il75a9y23of49zuufyn1olqy5jyu7s2p01r1l444d4r8kiuvq63g0iqif8xlgg8558x9ptnkncga65tu2fxa8nuq7as8gghwdjioactlzcfo1mk2d0sot81t5vb2ww43sq415ybmhykde04ivgtkvej3frzbnj4aly0gepcej9of29ex8fldqv4ihdinmmbsg7yzotpvno43t0b5i == \q\f\x\g\k\p\c\r\3\6\j\z\5\x\e\c\h\v\n\u\u\q\2\d\h\6\u\6\p\l\9\a\5\s\k\w\c\k\2\7\1\3\d\1\c\9\2\t\a\k\w\y\4\t\7\r\7\d\e\p\3\i\l\f\u\4\0\u\d\n\t\2\n\6\2\e\8\n\f\m\2\k\7\9\l\8\u\d\e\9\w\7\7\k\m\4\5\s\j\e\q\4\6\g\q\b\6\p\6\t\0\g\7\1\b\0\2\f\i\x\m\3\y\h\w\i\z\s\8\9\t\r\a\3\u\e\r\v\g\b\4\4\h\m\h\f\j\7\y\1\k\i\m\0\9\c\q\s\r\x\0\h\d\c\1\u\p\h\7\j\7\p\s\z\4\k\f\9\g\n\f\8\d\s\b\x\y\q\z\8\k\x\e\g\r\s\4\c\6\4\f\0\s\r\y\y\9\p\2\z\0\p\u\z\v\k\m\z\f\m\l\s\9\7\g\z\u\7\e\y\o\c\p\j\0\0\d\9\j\t\3\5\y\0\b\7\2\0\0\v\k\2\r\o\a\s\j\m\x\s\4\o\o\4\g\5\y\7\c\v\l\f\2\o\x\o\k\m\o\d\q\6\s\v\i\x\t\w\4\m\c\6\7\1\z\q\f\k\y\8\z\g\7\i\l\7\5\a\9\y\2\3\o\f\4\9\z\u\u\f\y\n\1\o\l\q\y\5\j\y\u\7\s\2\p\0\1\r\1\l\4\4\4\d\4\r\8\k\i\u\v\q\6\3\g\0\i\q\i\f\8\x\l\g\g\8\5\5\8\x\9\p\t\n\k\n\c\g\a\6\5\t\u\2\f\x\a\8\n\u\q\7\a\s\8\g\g\h\w\d\j\i\o\a\c\t\l\z\c\f\o\1\m\k\2\d\0\s\o\t\8\1\t\5\v\b\2\w\w\4\3\s\q\4\1\5\y\b\m\h\y\k\d\e\0\4\i\v\g\t\k\v\e\j\3\f\r\z\b\n\j\4\a\l\y\0\g\e\p\c\e\j\9\o\f\2\9\e\x\8\f\l\d\q\v\4\i\h\d\i\n\m\m\b\s\g\7\y\z\o\t\p\v\n\o\4\3\t\0\b\5\i ]] 00:08:29.461 13:42:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:29.461 13:42:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:29.461 [2024-10-01 13:42:39.513522] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:29.461 [2024-10-01 13:42:39.513647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60683 ] 00:08:29.719 [2024-10-01 13:42:39.652308] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.719 [2024-10-01 13:42:39.774048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.719 [2024-10-01 13:42:39.830188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.977  Copying: 512/512 [B] (average 166 kBps) 00:08:29.977 00:08:29.978 13:42:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qfxgkpcr36jz5xechvnuuq2dh6u6pl9a5skwck2713d1c92takwy4t7r7dep3ilfu40udnt2n62e8nfm2k79l8ude9w77km45sjeq46gqb6p6t0g71b02fixm3yhwizs89tra3uervgb44hmhfj7y1kim09cqsrx0hdc1uph7j7psz4kf9gnf8dsbxyqz8kxegrs4c64f0sryy9p2z0puzvkmzfmls97gzu7eyocpj00d9jt35y0b7200vk2roasjmxs4oo4g5y7cvlf2oxokmodq6svixtw4mc671zqfky8zg7il75a9y23of49zuufyn1olqy5jyu7s2p01r1l444d4r8kiuvq63g0iqif8xlgg8558x9ptnkncga65tu2fxa8nuq7as8gghwdjioactlzcfo1mk2d0sot81t5vb2ww43sq415ybmhykde04ivgtkvej3frzbnj4aly0gepcej9of29ex8fldqv4ihdinmmbsg7yzotpvno43t0b5i == \q\f\x\g\k\p\c\r\3\6\j\z\5\x\e\c\h\v\n\u\u\q\2\d\h\6\u\6\p\l\9\a\5\s\k\w\c\k\2\7\1\3\d\1\c\9\2\t\a\k\w\y\4\t\7\r\7\d\e\p\3\i\l\f\u\4\0\u\d\n\t\2\n\6\2\e\8\n\f\m\2\k\7\9\l\8\u\d\e\9\w\7\7\k\m\4\5\s\j\e\q\4\6\g\q\b\6\p\6\t\0\g\7\1\b\0\2\f\i\x\m\3\y\h\w\i\z\s\8\9\t\r\a\3\u\e\r\v\g\b\4\4\h\m\h\f\j\7\y\1\k\i\m\0\9\c\q\s\r\x\0\h\d\c\1\u\p\h\7\j\7\p\s\z\4\k\f\9\g\n\f\8\d\s\b\x\y\q\z\8\k\x\e\g\r\s\4\c\6\4\f\0\s\r\y\y\9\p\2\z\0\p\u\z\v\k\m\z\f\m\l\s\9\7\g\z\u\7\e\y\o\c\p\j\0\0\d\9\j\t\3\5\y\0\b\7\2\0\0\v\k\2\r\o\a\s\j\m\x\s\4\o\o\4\g\5\y\7\c\v\l\f\2\o\x\o\k\m\o\d\q\6\s\v\i\x\t\w\4\m\c\6\7\1\z\q\f\k\y\8\z\g\7\i\l\7\5\a\9\y\2\3\o\f\4\9\z\u\u\f\y\n\1\o\l\q\y\5\j\y\u\7\s\2\p\0\1\r\1\l\4\4\4\d\4\r\8\k\i\u\v\q\6\3\g\0\i\q\i\f\8\x\l\g\g\8\5\5\8\x\9\p\t\n\k\n\c\g\a\6\5\t\u\2\f\x\a\8\n\u\q\7\a\s\8\g\g\h\w\d\j\i\o\a\c\t\l\z\c\f\o\1\m\k\2\d\0\s\o\t\8\1\t\5\v\b\2\w\w\4\3\s\q\4\1\5\y\b\m\h\y\k\d\e\0\4\i\v\g\t\k\v\e\j\3\f\r\z\b\n\j\4\a\l\y\0\g\e\p\c\e\j\9\o\f\2\9\e\x\8\f\l\d\q\v\4\i\h\d\i\n\m\m\b\s\g\7\y\z\o\t\p\v\n\o\4\3\t\0\b\5\i ]] 00:08:29.978 00:08:29.978 real 0m5.440s 00:08:29.978 user 0m3.278s 00:08:29.978 sys 0m2.457s 00:08:29.978 13:42:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.978 ************************************ 00:08:29.978 13:42:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:29.978 END TEST dd_flags_misc 00:08:29.978 ************************************ 00:08:29.978 13:42:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:29.978 13:42:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:29.978 * Second test run, disabling liburing, forcing AIO 00:08:29.978 13:42:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:29.978 13:42:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:29.978 13:42:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.978 13:42:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.978 13:42:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:08:29.978 ************************************ 00:08:29.978 START TEST dd_flag_append_forced_aio 00:08:29.978 ************************************ 00:08:29.978 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:08:29.978 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:29.978 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:30.236 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:30.236 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:30.236 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:30.236 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=m9hknv1zd07b9yefpw1wy39kiwcf98bt 00:08:30.236 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:30.236 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:30.236 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:30.236 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=82n8f7p2wxoryt4wzhlpf5sa7op6ffg1 00:08:30.236 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s m9hknv1zd07b9yefpw1wy39kiwcf98bt 00:08:30.236 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 82n8f7p2wxoryt4wzhlpf5sa7op6ffg1 00:08:30.236 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:30.236 [2024-10-01 13:42:40.221207] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:30.236 [2024-10-01 13:42:40.221331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60714 ] 00:08:30.236 [2024-10-01 13:42:40.364992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.494 [2024-10-01 13:42:40.488455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.494 [2024-10-01 13:42:40.543859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.753  Copying: 32/32 [B] (average 31 kBps) 00:08:30.753 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 82n8f7p2wxoryt4wzhlpf5sa7op6ffg1m9hknv1zd07b9yefpw1wy39kiwcf98bt == \8\2\n\8\f\7\p\2\w\x\o\r\y\t\4\w\z\h\l\p\f\5\s\a\7\o\p\6\f\f\g\1\m\9\h\k\n\v\1\z\d\0\7\b\9\y\e\f\p\w\1\w\y\3\9\k\i\w\c\f\9\8\b\t ]] 00:08:30.753 00:08:30.753 real 0m0.710s 00:08:30.753 user 0m0.420s 00:08:30.753 sys 0m0.161s 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:30.753 ************************************ 00:08:30.753 END TEST dd_flag_append_forced_aio 00:08:30.753 ************************************ 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:30.753 ************************************ 00:08:30.753 START TEST dd_flag_directory_forced_aio 00:08:30.753 ************************************ 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:30.753 13:42:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.010 [2024-10-01 13:42:40.979440] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:31.010 [2024-10-01 13:42:40.979588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60740 ] 00:08:31.010 [2024-10-01 13:42:41.123844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.268 [2024-10-01 13:42:41.253507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.268 [2024-10-01 13:42:41.310869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.268 [2024-10-01 13:42:41.354519] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:31.268 [2024-10-01 13:42:41.354612] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:31.268 [2024-10-01 13:42:41.354638] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.526 [2024-10-01 13:42:41.475415] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 
00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.526 13:42:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:31.526 [2024-10-01 13:42:41.653701] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:31.526 [2024-10-01 13:42:41.654246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60755 ] 00:08:31.785 [2024-10-01 13:42:41.799514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.785 [2024-10-01 13:42:41.936071] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.042 [2024-10-01 13:42:41.998664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.042 [2024-10-01 13:42:42.037762] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:32.042 [2024-10-01 13:42:42.038109] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:32.042 [2024-10-01 13:42:42.038131] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.042 [2024-10-01 13:42:42.159606] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
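The directory-flag cases traced above are negative tests: spdk_dd is handed a regular file together with --iflag=directory (and then --oflag=directory) and is expected to fail with "Not a directory"; the NOT helper from common/autotest_common.sh inverts the exit status so the expected failure counts as a pass. A rough stand-alone equivalent of that pattern (a sketch, not the real helper):

# dd.dump0 is a regular file, so both invocations should be rejected
if build/bin/spdk_dd --aio --if=test/dd/dd.dump0 --iflag=directory --of=test/dd/dd.dump0; then
    echo "unexpected success with --iflag=directory" >&2; exit 1
fi
if build/bin/spdk_dd --aio --if=test/dd/dd.dump0 --of=test/dd/dd.dump0 --oflag=directory; then
    echo "unexpected success with --oflag=directory" >&2; exit 1
fi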
00:08:32.300 00:08:32.300 real 0m1.357s 00:08:32.300 user 0m0.818s 00:08:32.300 sys 0m0.321s 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.300 ************************************ 00:08:32.300 END TEST dd_flag_directory_forced_aio 00:08:32.300 ************************************ 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:32.300 ************************************ 00:08:32.300 START TEST dd_flag_nofollow_forced_aio 00:08:32.300 ************************************ 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:32.300 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:32.301 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:32.301 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:32.301 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:32.301 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:32.301 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:32.301 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.301 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.301 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.301 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.301 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.301 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.301 13:42:42 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.301 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.301 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:32.301 [2024-10-01 13:42:42.387500] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:32.301 [2024-10-01 13:42:42.387785] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60784 ] 00:08:32.559 [2024-10-01 13:42:42.531222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.559 [2024-10-01 13:42:42.651526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.559 [2024-10-01 13:42:42.704878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.817 [2024-10-01 13:42:42.742636] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:32.817 [2024-10-01 13:42:42.743001] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:32.817 [2024-10-01 13:42:42.743023] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.817 [2024-10-01 13:42:42.863120] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.817 13:42:42 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.817 13:42:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:33.075 [2024-10-01 13:42:43.039501] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:33.075 [2024-10-01 13:42:43.039605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60793 ] 00:08:33.075 [2024-10-01 13:42:43.174496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.333 [2024-10-01 13:42:43.294466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.333 [2024-10-01 13:42:43.347302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.333 [2024-10-01 13:42:43.385084] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:33.333 [2024-10-01 13:42:43.385155] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:33.333 [2024-10-01 13:42:43.385173] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.333 [2024-10-01 13:42:43.510410] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:33.591 13:42:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:08:33.591 13:42:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:33.591 13:42:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:08:33.591 13:42:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:33.591 13:42:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:33.591 13:42:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:33.591 13:42:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:08:33.591 13:42:43 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:33.591 13:42:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:33.591 13:42:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:33.591 [2024-10-01 13:42:43.730194] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:33.591 [2024-10-01 13:42:43.731235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60806 ] 00:08:33.849 [2024-10-01 13:42:43.862090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.849 [2024-10-01 13:42:43.991199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.107 [2024-10-01 13:42:44.044206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.365  Copying: 512/512 [B] (average 500 kBps) 00:08:34.365 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 9qgmhn7k4zohh3sprj4ndeebc23iofjh15hgp5roiqpiswvp2hnklarmxv5au0z31kkc86aka723k24v6qz0ehd697yq0yjr683xcyh5bc9t7tyux0idetwn12qeela5e2ly1ii44etap0663l5z0ijbq1lb4l084l6eek8u5x4b9swl416weqir3rl54gbdx91fz8l80fmeurzuhx135se7tqoerdlqfmvjod03k6qtptildzpvcuc0fogaua57jplzbhdxnhrfwpb15pofgrftbrflf64y1oxp33xr4jicrm18skd32vjrw6jk85pkbsokbzxljrmz01c07dd1q41noxn9vz75a3kcoslf8zwdv8yonpq7l9wzy3bjb7fwkmaq7iup90vkbkexxr80pn1ei7gsujryybq1b4j969ax7r3x5rgl9bh79ft8uxqzanfofltrt9kj35ymal91ix29bcdaqsx33cy3dzvrkli5we3vjw330v04v649vkza == \9\q\g\m\h\n\7\k\4\z\o\h\h\3\s\p\r\j\4\n\d\e\e\b\c\2\3\i\o\f\j\h\1\5\h\g\p\5\r\o\i\q\p\i\s\w\v\p\2\h\n\k\l\a\r\m\x\v\5\a\u\0\z\3\1\k\k\c\8\6\a\k\a\7\2\3\k\2\4\v\6\q\z\0\e\h\d\6\9\7\y\q\0\y\j\r\6\8\3\x\c\y\h\5\b\c\9\t\7\t\y\u\x\0\i\d\e\t\w\n\1\2\q\e\e\l\a\5\e\2\l\y\1\i\i\4\4\e\t\a\p\0\6\6\3\l\5\z\0\i\j\b\q\1\l\b\4\l\0\8\4\l\6\e\e\k\8\u\5\x\4\b\9\s\w\l\4\1\6\w\e\q\i\r\3\r\l\5\4\g\b\d\x\9\1\f\z\8\l\8\0\f\m\e\u\r\z\u\h\x\1\3\5\s\e\7\t\q\o\e\r\d\l\q\f\m\v\j\o\d\0\3\k\6\q\t\p\t\i\l\d\z\p\v\c\u\c\0\f\o\g\a\u\a\5\7\j\p\l\z\b\h\d\x\n\h\r\f\w\p\b\1\5\p\o\f\g\r\f\t\b\r\f\l\f\6\4\y\1\o\x\p\3\3\x\r\4\j\i\c\r\m\1\8\s\k\d\3\2\v\j\r\w\6\j\k\8\5\p\k\b\s\o\k\b\z\x\l\j\r\m\z\0\1\c\0\7\d\d\1\q\4\1\n\o\x\n\9\v\z\7\5\a\3\k\c\o\s\l\f\8\z\w\d\v\8\y\o\n\p\q\7\l\9\w\z\y\3\b\j\b\7\f\w\k\m\a\q\7\i\u\p\9\0\v\k\b\k\e\x\x\r\8\0\p\n\1\e\i\7\g\s\u\j\r\y\y\b\q\1\b\4\j\9\6\9\a\x\7\r\3\x\5\r\g\l\9\b\h\7\9\f\t\8\u\x\q\z\a\n\f\o\f\l\t\r\t\9\k\j\3\5\y\m\a\l\9\1\i\x\2\9\b\c\d\a\q\s\x\3\3\c\y\3\d\z\v\r\k\l\i\5\w\e\3\v\j\w\3\3\0\v\0\4\v\6\4\9\v\k\z\a ]] 00:08:34.365 00:08:34.365 real 0m2.029s 00:08:34.365 user 0m1.225s 00:08:34.365 sys 0m0.460s 00:08:34.365 ************************************ 00:08:34.365 END TEST dd_flag_nofollow_forced_aio 00:08:34.365 ************************************ 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:34.365 
13:42:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:34.365 ************************************ 00:08:34.365 START TEST dd_flag_noatime_forced_aio 00:08:34.365 ************************************ 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1727790164 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1727790164 00:08:34.365 13:42:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:35.302 13:42:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:35.302 [2024-10-01 13:42:45.475407] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:35.302 [2024-10-01 13:42:45.475760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60847 ] 00:08:35.561 [2024-10-01 13:42:45.605661] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.561 [2024-10-01 13:42:45.725384] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.842 [2024-10-01 13:42:45.778793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.100  Copying: 512/512 [B] (average 500 kBps) 00:08:36.100 00:08:36.100 13:42:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:36.100 13:42:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1727790164 )) 00:08:36.100 13:42:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:36.100 13:42:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1727790164 )) 00:08:36.100 13:42:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:36.100 [2024-10-01 13:42:46.161755] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:36.100 [2024-10-01 13:42:46.161882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60858 ] 00:08:36.359 [2024-10-01 13:42:46.306062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.359 [2024-10-01 13:42:46.448845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.359 [2024-10-01 13:42:46.504509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.874  Copying: 512/512 [B] (average 500 kBps) 00:08:36.874 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1727790166 )) 00:08:36.874 00:08:36.874 real 0m2.421s 00:08:36.874 user 0m0.848s 00:08:36.874 sys 0m0.324s 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.874 ************************************ 00:08:36.874 END TEST dd_flag_noatime_forced_aio 00:08:36.874 ************************************ 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:36.874 
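The noatime run that just finished follows the sequence visible in its xtrace: record the source file's access time, wait a second, copy with --iflag=noatime and require the atime to be unchanged, then copy again without the flag and require it to advance. A minimal sketch assembled from that trace (not the dd/posix.sh source; paths abbreviated):

atime_if=$(stat --printf=%X test/dd/dd.dump0)
sleep 1
# with --iflag=noatime the read must not update dd.dump0's access time
build/bin/spdk_dd --aio --if=test/dd/dd.dump0 --iflag=noatime --of=test/dd/dd.dump1
(( $(stat --printf=%X test/dd/dd.dump0) == atime_if ))
# a plain copy afterwards is expected to bump the access time past the recorded value
build/bin/spdk_dd --aio --if=test/dd/dd.dump0 --of=test/dd/dd.dump1
(( atime_if < $(stat --printf=%X test/dd/dd.dump0) ))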
************************************ 00:08:36.874 START TEST dd_flags_misc_forced_aio 00:08:36.874 ************************************ 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:36.874 13:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:36.874 [2024-10-01 13:42:46.915761] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:36.874 [2024-10-01 13:42:46.915868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60890 ] 00:08:36.874 [2024-10-01 13:42:47.048247] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.131 [2024-10-01 13:42:47.170110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.131 [2024-10-01 13:42:47.224558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.389  Copying: 512/512 [B] (average 500 kBps) 00:08:37.389 00:08:37.389 13:42:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s9onse3wcoba2uzln9tczr6puc9vsqw6llljy3bfw0hd9t8bnvtci5ugm2bc6ilmguzdiu53jtthsoiaej2fx3mfwp3swvr0ahqc5csmf7afcsqpsmwagmd5tjnmok8qyssxq3cpd6ks95h8nuhwxic32xecuzs9zgkue9o1vd6e5hlk7wty5kg43ca68zu4eedfc31qzuzr9k5civ3brfvdb3f514k502599nhiwou6hx8gy0x24mtlt029ydcmybgprf67hvqjgsv36liwf5zeywhcvor1pm7qrheb1cqn5rqirmm8bm0vsm2kboi5q4vipz6h24jk4r9d6gayab3gpoospf1y2wh4irh2gqkb949t6gsu5itqzl1ahnv307ypgn8kfff5vgo4ckpbdzcfeyprlbggkvkiwdts3amgjsbjtz27l69r5o1ovovu69g6gvymyplblt2j4rpe6jcpbmlx6k6848z2q71z19h03n7lswlso2t5qiqjv3zf == 
\s\9\o\n\s\e\3\w\c\o\b\a\2\u\z\l\n\9\t\c\z\r\6\p\u\c\9\v\s\q\w\6\l\l\l\j\y\3\b\f\w\0\h\d\9\t\8\b\n\v\t\c\i\5\u\g\m\2\b\c\6\i\l\m\g\u\z\d\i\u\5\3\j\t\t\h\s\o\i\a\e\j\2\f\x\3\m\f\w\p\3\s\w\v\r\0\a\h\q\c\5\c\s\m\f\7\a\f\c\s\q\p\s\m\w\a\g\m\d\5\t\j\n\m\o\k\8\q\y\s\s\x\q\3\c\p\d\6\k\s\9\5\h\8\n\u\h\w\x\i\c\3\2\x\e\c\u\z\s\9\z\g\k\u\e\9\o\1\v\d\6\e\5\h\l\k\7\w\t\y\5\k\g\4\3\c\a\6\8\z\u\4\e\e\d\f\c\3\1\q\z\u\z\r\9\k\5\c\i\v\3\b\r\f\v\d\b\3\f\5\1\4\k\5\0\2\5\9\9\n\h\i\w\o\u\6\h\x\8\g\y\0\x\2\4\m\t\l\t\0\2\9\y\d\c\m\y\b\g\p\r\f\6\7\h\v\q\j\g\s\v\3\6\l\i\w\f\5\z\e\y\w\h\c\v\o\r\1\p\m\7\q\r\h\e\b\1\c\q\n\5\r\q\i\r\m\m\8\b\m\0\v\s\m\2\k\b\o\i\5\q\4\v\i\p\z\6\h\2\4\j\k\4\r\9\d\6\g\a\y\a\b\3\g\p\o\o\s\p\f\1\y\2\w\h\4\i\r\h\2\g\q\k\b\9\4\9\t\6\g\s\u\5\i\t\q\z\l\1\a\h\n\v\3\0\7\y\p\g\n\8\k\f\f\f\5\v\g\o\4\c\k\p\b\d\z\c\f\e\y\p\r\l\b\g\g\k\v\k\i\w\d\t\s\3\a\m\g\j\s\b\j\t\z\2\7\l\6\9\r\5\o\1\o\v\o\v\u\6\9\g\6\g\v\y\m\y\p\l\b\l\t\2\j\4\r\p\e\6\j\c\p\b\m\l\x\6\k\6\8\4\8\z\2\q\7\1\z\1\9\h\0\3\n\7\l\s\w\l\s\o\2\t\5\q\i\q\j\v\3\z\f ]] 00:08:37.389 13:42:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:37.389 13:42:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:37.647 [2024-10-01 13:42:47.624762] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:37.647 [2024-10-01 13:42:47.625782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60898 ] 00:08:37.647 [2024-10-01 13:42:47.770158] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.905 [2024-10-01 13:42:47.950693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.905 [2024-10-01 13:42:48.008500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.472  Copying: 512/512 [B] (average 500 kBps) 00:08:38.472 00:08:38.472 13:42:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s9onse3wcoba2uzln9tczr6puc9vsqw6llljy3bfw0hd9t8bnvtci5ugm2bc6ilmguzdiu53jtthsoiaej2fx3mfwp3swvr0ahqc5csmf7afcsqpsmwagmd5tjnmok8qyssxq3cpd6ks95h8nuhwxic32xecuzs9zgkue9o1vd6e5hlk7wty5kg43ca68zu4eedfc31qzuzr9k5civ3brfvdb3f514k502599nhiwou6hx8gy0x24mtlt029ydcmybgprf67hvqjgsv36liwf5zeywhcvor1pm7qrheb1cqn5rqirmm8bm0vsm2kboi5q4vipz6h24jk4r9d6gayab3gpoospf1y2wh4irh2gqkb949t6gsu5itqzl1ahnv307ypgn8kfff5vgo4ckpbdzcfeyprlbggkvkiwdts3amgjsbjtz27l69r5o1ovovu69g6gvymyplblt2j4rpe6jcpbmlx6k6848z2q71z19h03n7lswlso2t5qiqjv3zf == 
\s\9\o\n\s\e\3\w\c\o\b\a\2\u\z\l\n\9\t\c\z\r\6\p\u\c\9\v\s\q\w\6\l\l\l\j\y\3\b\f\w\0\h\d\9\t\8\b\n\v\t\c\i\5\u\g\m\2\b\c\6\i\l\m\g\u\z\d\i\u\5\3\j\t\t\h\s\o\i\a\e\j\2\f\x\3\m\f\w\p\3\s\w\v\r\0\a\h\q\c\5\c\s\m\f\7\a\f\c\s\q\p\s\m\w\a\g\m\d\5\t\j\n\m\o\k\8\q\y\s\s\x\q\3\c\p\d\6\k\s\9\5\h\8\n\u\h\w\x\i\c\3\2\x\e\c\u\z\s\9\z\g\k\u\e\9\o\1\v\d\6\e\5\h\l\k\7\w\t\y\5\k\g\4\3\c\a\6\8\z\u\4\e\e\d\f\c\3\1\q\z\u\z\r\9\k\5\c\i\v\3\b\r\f\v\d\b\3\f\5\1\4\k\5\0\2\5\9\9\n\h\i\w\o\u\6\h\x\8\g\y\0\x\2\4\m\t\l\t\0\2\9\y\d\c\m\y\b\g\p\r\f\6\7\h\v\q\j\g\s\v\3\6\l\i\w\f\5\z\e\y\w\h\c\v\o\r\1\p\m\7\q\r\h\e\b\1\c\q\n\5\r\q\i\r\m\m\8\b\m\0\v\s\m\2\k\b\o\i\5\q\4\v\i\p\z\6\h\2\4\j\k\4\r\9\d\6\g\a\y\a\b\3\g\p\o\o\s\p\f\1\y\2\w\h\4\i\r\h\2\g\q\k\b\9\4\9\t\6\g\s\u\5\i\t\q\z\l\1\a\h\n\v\3\0\7\y\p\g\n\8\k\f\f\f\5\v\g\o\4\c\k\p\b\d\z\c\f\e\y\p\r\l\b\g\g\k\v\k\i\w\d\t\s\3\a\m\g\j\s\b\j\t\z\2\7\l\6\9\r\5\o\1\o\v\o\v\u\6\9\g\6\g\v\y\m\y\p\l\b\l\t\2\j\4\r\p\e\6\j\c\p\b\m\l\x\6\k\6\8\4\8\z\2\q\7\1\z\1\9\h\0\3\n\7\l\s\w\l\s\o\2\t\5\q\i\q\j\v\3\z\f ]] 00:08:38.472 13:42:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:38.472 13:42:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:38.472 [2024-10-01 13:42:48.401058] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:38.472 [2024-10-01 13:42:48.401207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60911 ] 00:08:38.472 [2024-10-01 13:42:48.535898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.729 [2024-10-01 13:42:48.698702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.729 [2024-10-01 13:42:48.780035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.304  Copying: 512/512 [B] (average 125 kBps) 00:08:39.304 00:08:39.304 13:42:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s9onse3wcoba2uzln9tczr6puc9vsqw6llljy3bfw0hd9t8bnvtci5ugm2bc6ilmguzdiu53jtthsoiaej2fx3mfwp3swvr0ahqc5csmf7afcsqpsmwagmd5tjnmok8qyssxq3cpd6ks95h8nuhwxic32xecuzs9zgkue9o1vd6e5hlk7wty5kg43ca68zu4eedfc31qzuzr9k5civ3brfvdb3f514k502599nhiwou6hx8gy0x24mtlt029ydcmybgprf67hvqjgsv36liwf5zeywhcvor1pm7qrheb1cqn5rqirmm8bm0vsm2kboi5q4vipz6h24jk4r9d6gayab3gpoospf1y2wh4irh2gqkb949t6gsu5itqzl1ahnv307ypgn8kfff5vgo4ckpbdzcfeyprlbggkvkiwdts3amgjsbjtz27l69r5o1ovovu69g6gvymyplblt2j4rpe6jcpbmlx6k6848z2q71z19h03n7lswlso2t5qiqjv3zf == 
\s\9\o\n\s\e\3\w\c\o\b\a\2\u\z\l\n\9\t\c\z\r\6\p\u\c\9\v\s\q\w\6\l\l\l\j\y\3\b\f\w\0\h\d\9\t\8\b\n\v\t\c\i\5\u\g\m\2\b\c\6\i\l\m\g\u\z\d\i\u\5\3\j\t\t\h\s\o\i\a\e\j\2\f\x\3\m\f\w\p\3\s\w\v\r\0\a\h\q\c\5\c\s\m\f\7\a\f\c\s\q\p\s\m\w\a\g\m\d\5\t\j\n\m\o\k\8\q\y\s\s\x\q\3\c\p\d\6\k\s\9\5\h\8\n\u\h\w\x\i\c\3\2\x\e\c\u\z\s\9\z\g\k\u\e\9\o\1\v\d\6\e\5\h\l\k\7\w\t\y\5\k\g\4\3\c\a\6\8\z\u\4\e\e\d\f\c\3\1\q\z\u\z\r\9\k\5\c\i\v\3\b\r\f\v\d\b\3\f\5\1\4\k\5\0\2\5\9\9\n\h\i\w\o\u\6\h\x\8\g\y\0\x\2\4\m\t\l\t\0\2\9\y\d\c\m\y\b\g\p\r\f\6\7\h\v\q\j\g\s\v\3\6\l\i\w\f\5\z\e\y\w\h\c\v\o\r\1\p\m\7\q\r\h\e\b\1\c\q\n\5\r\q\i\r\m\m\8\b\m\0\v\s\m\2\k\b\o\i\5\q\4\v\i\p\z\6\h\2\4\j\k\4\r\9\d\6\g\a\y\a\b\3\g\p\o\o\s\p\f\1\y\2\w\h\4\i\r\h\2\g\q\k\b\9\4\9\t\6\g\s\u\5\i\t\q\z\l\1\a\h\n\v\3\0\7\y\p\g\n\8\k\f\f\f\5\v\g\o\4\c\k\p\b\d\z\c\f\e\y\p\r\l\b\g\g\k\v\k\i\w\d\t\s\3\a\m\g\j\s\b\j\t\z\2\7\l\6\9\r\5\o\1\o\v\o\v\u\6\9\g\6\g\v\y\m\y\p\l\b\l\t\2\j\4\r\p\e\6\j\c\p\b\m\l\x\6\k\6\8\4\8\z\2\q\7\1\z\1\9\h\0\3\n\7\l\s\w\l\s\o\2\t\5\q\i\q\j\v\3\z\f ]] 00:08:39.304 13:42:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:39.304 13:42:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:39.304 [2024-10-01 13:42:49.244622] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:39.304 [2024-10-01 13:42:49.244744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60918 ] 00:08:39.304 [2024-10-01 13:42:49.381837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.591 [2024-10-01 13:42:49.513731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.591 [2024-10-01 13:42:49.573948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.849  Copying: 512/512 [B] (average 500 kBps) 00:08:39.849 00:08:39.849 13:42:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s9onse3wcoba2uzln9tczr6puc9vsqw6llljy3bfw0hd9t8bnvtci5ugm2bc6ilmguzdiu53jtthsoiaej2fx3mfwp3swvr0ahqc5csmf7afcsqpsmwagmd5tjnmok8qyssxq3cpd6ks95h8nuhwxic32xecuzs9zgkue9o1vd6e5hlk7wty5kg43ca68zu4eedfc31qzuzr9k5civ3brfvdb3f514k502599nhiwou6hx8gy0x24mtlt029ydcmybgprf67hvqjgsv36liwf5zeywhcvor1pm7qrheb1cqn5rqirmm8bm0vsm2kboi5q4vipz6h24jk4r9d6gayab3gpoospf1y2wh4irh2gqkb949t6gsu5itqzl1ahnv307ypgn8kfff5vgo4ckpbdzcfeyprlbggkvkiwdts3amgjsbjtz27l69r5o1ovovu69g6gvymyplblt2j4rpe6jcpbmlx6k6848z2q71z19h03n7lswlso2t5qiqjv3zf == 
\s\9\o\n\s\e\3\w\c\o\b\a\2\u\z\l\n\9\t\c\z\r\6\p\u\c\9\v\s\q\w\6\l\l\l\j\y\3\b\f\w\0\h\d\9\t\8\b\n\v\t\c\i\5\u\g\m\2\b\c\6\i\l\m\g\u\z\d\i\u\5\3\j\t\t\h\s\o\i\a\e\j\2\f\x\3\m\f\w\p\3\s\w\v\r\0\a\h\q\c\5\c\s\m\f\7\a\f\c\s\q\p\s\m\w\a\g\m\d\5\t\j\n\m\o\k\8\q\y\s\s\x\q\3\c\p\d\6\k\s\9\5\h\8\n\u\h\w\x\i\c\3\2\x\e\c\u\z\s\9\z\g\k\u\e\9\o\1\v\d\6\e\5\h\l\k\7\w\t\y\5\k\g\4\3\c\a\6\8\z\u\4\e\e\d\f\c\3\1\q\z\u\z\r\9\k\5\c\i\v\3\b\r\f\v\d\b\3\f\5\1\4\k\5\0\2\5\9\9\n\h\i\w\o\u\6\h\x\8\g\y\0\x\2\4\m\t\l\t\0\2\9\y\d\c\m\y\b\g\p\r\f\6\7\h\v\q\j\g\s\v\3\6\l\i\w\f\5\z\e\y\w\h\c\v\o\r\1\p\m\7\q\r\h\e\b\1\c\q\n\5\r\q\i\r\m\m\8\b\m\0\v\s\m\2\k\b\o\i\5\q\4\v\i\p\z\6\h\2\4\j\k\4\r\9\d\6\g\a\y\a\b\3\g\p\o\o\s\p\f\1\y\2\w\h\4\i\r\h\2\g\q\k\b\9\4\9\t\6\g\s\u\5\i\t\q\z\l\1\a\h\n\v\3\0\7\y\p\g\n\8\k\f\f\f\5\v\g\o\4\c\k\p\b\d\z\c\f\e\y\p\r\l\b\g\g\k\v\k\i\w\d\t\s\3\a\m\g\j\s\b\j\t\z\2\7\l\6\9\r\5\o\1\o\v\o\v\u\6\9\g\6\g\v\y\m\y\p\l\b\l\t\2\j\4\r\p\e\6\j\c\p\b\m\l\x\6\k\6\8\4\8\z\2\q\7\1\z\1\9\h\0\3\n\7\l\s\w\l\s\o\2\t\5\q\i\q\j\v\3\z\f ]] 00:08:39.849 13:42:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:39.849 13:42:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:39.849 13:42:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:39.849 13:42:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:39.849 13:42:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:39.849 13:42:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:39.849 [2024-10-01 13:42:49.922710] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
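The spdk_dd runs in this stretch of the log all come from the flag matrix in dd/posix.sh: each read flag in flags_ro=(direct nonblock) is paired with every write flag in flags_rw=(direct nonblock sync dsync), and after every copy the 512-byte payload read back from dd.dump1 has to match what was generated into dd.dump0. A simplified sketch of that loop, with SPDK_DD, DUMP0 and DUMP1 as illustrative stand-ins for the paths seen above, and assuming gen_bytes' output is redirected into the input file (the redirection itself is not shown by xtrace):

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
      gen_bytes 512 > "$DUMP0"                      # 512 bytes of random payload (helper in dd/common.sh)
      for flag_rw in "${flags_rw[@]}"; do
          "$SPDK_DD" --aio --if="$DUMP0" --iflag="$flag_ro" \
                     --of="$DUMP1" --oflag="$flag_rw"
          [[ "$(<"$DUMP1")" == "$(<"$DUMP0")" ]]    # payload must survive every flag combination
      done
  done

The long backslash-escaped strings in the log are just bash xtrace printing the quoted right-hand side of that [[ ... == ... ]] comparison.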
00:08:39.849 [2024-10-01 13:42:49.922814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60931 ] 00:08:40.106 [2024-10-01 13:42:50.052970] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.106 [2024-10-01 13:42:50.200068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.106 [2024-10-01 13:42:50.253353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.621  Copying: 512/512 [B] (average 500 kBps) 00:08:40.621 00:08:40.621 13:42:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ i5sosti12zxpn1s5gvj6miwduxlg69nkax1wmhf6c5uywz0lrb6pf7dudppdcp2jfzohp9t12psq468jbzju9aixzb29k11l8b1vq09kaj8ddk4tcihv6z2hjcuam0g8yxshyxok69rhzli485hkfstv85f1cb2s3atgtlgn6xel7ycug24x2sx9exqh2lf51zj7e59nptbasm0hd48nvknifu55289ps9obxt79gpzolzq9hv3j0upet90nub9j35cqz09nbeam23wut8rc2zj6xxgc44qzh9cswmw469jldt42stutv9uqy0qzquy0tm8n4ggpbjodz7838puslat9c1k4e7m1nr1u1as39njegw6qbdz3xqbq8z7sghip2jztkikgvvu7rwna5is6szxeoy0j8fhos7tc9dmrf9qttlxd2lzhn101oaqmoup5ob8lh7n5t9trncks3fewmqixpvf8l07l1v7jdcssf245acga9nln012mzccv3dbf == \i\5\s\o\s\t\i\1\2\z\x\p\n\1\s\5\g\v\j\6\m\i\w\d\u\x\l\g\6\9\n\k\a\x\1\w\m\h\f\6\c\5\u\y\w\z\0\l\r\b\6\p\f\7\d\u\d\p\p\d\c\p\2\j\f\z\o\h\p\9\t\1\2\p\s\q\4\6\8\j\b\z\j\u\9\a\i\x\z\b\2\9\k\1\1\l\8\b\1\v\q\0\9\k\a\j\8\d\d\k\4\t\c\i\h\v\6\z\2\h\j\c\u\a\m\0\g\8\y\x\s\h\y\x\o\k\6\9\r\h\z\l\i\4\8\5\h\k\f\s\t\v\8\5\f\1\c\b\2\s\3\a\t\g\t\l\g\n\6\x\e\l\7\y\c\u\g\2\4\x\2\s\x\9\e\x\q\h\2\l\f\5\1\z\j\7\e\5\9\n\p\t\b\a\s\m\0\h\d\4\8\n\v\k\n\i\f\u\5\5\2\8\9\p\s\9\o\b\x\t\7\9\g\p\z\o\l\z\q\9\h\v\3\j\0\u\p\e\t\9\0\n\u\b\9\j\3\5\c\q\z\0\9\n\b\e\a\m\2\3\w\u\t\8\r\c\2\z\j\6\x\x\g\c\4\4\q\z\h\9\c\s\w\m\w\4\6\9\j\l\d\t\4\2\s\t\u\t\v\9\u\q\y\0\q\z\q\u\y\0\t\m\8\n\4\g\g\p\b\j\o\d\z\7\8\3\8\p\u\s\l\a\t\9\c\1\k\4\e\7\m\1\n\r\1\u\1\a\s\3\9\n\j\e\g\w\6\q\b\d\z\3\x\q\b\q\8\z\7\s\g\h\i\p\2\j\z\t\k\i\k\g\v\v\u\7\r\w\n\a\5\i\s\6\s\z\x\e\o\y\0\j\8\f\h\o\s\7\t\c\9\d\m\r\f\9\q\t\t\l\x\d\2\l\z\h\n\1\0\1\o\a\q\m\o\u\p\5\o\b\8\l\h\7\n\5\t\9\t\r\n\c\k\s\3\f\e\w\m\q\i\x\p\v\f\8\l\0\7\l\1\v\7\j\d\c\s\s\f\2\4\5\a\c\g\a\9\n\l\n\0\1\2\m\z\c\c\v\3\d\b\f ]] 00:08:40.621 13:42:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:40.621 13:42:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:40.621 [2024-10-01 13:42:50.605530] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:40.621 [2024-10-01 13:42:50.605645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60939 ] 00:08:40.622 [2024-10-01 13:42:50.743417] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.879 [2024-10-01 13:42:50.863458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.879 [2024-10-01 13:42:50.917120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.137  Copying: 512/512 [B] (average 500 kBps) 00:08:41.137 00:08:41.137 13:42:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ i5sosti12zxpn1s5gvj6miwduxlg69nkax1wmhf6c5uywz0lrb6pf7dudppdcp2jfzohp9t12psq468jbzju9aixzb29k11l8b1vq09kaj8ddk4tcihv6z2hjcuam0g8yxshyxok69rhzli485hkfstv85f1cb2s3atgtlgn6xel7ycug24x2sx9exqh2lf51zj7e59nptbasm0hd48nvknifu55289ps9obxt79gpzolzq9hv3j0upet90nub9j35cqz09nbeam23wut8rc2zj6xxgc44qzh9cswmw469jldt42stutv9uqy0qzquy0tm8n4ggpbjodz7838puslat9c1k4e7m1nr1u1as39njegw6qbdz3xqbq8z7sghip2jztkikgvvu7rwna5is6szxeoy0j8fhos7tc9dmrf9qttlxd2lzhn101oaqmoup5ob8lh7n5t9trncks3fewmqixpvf8l07l1v7jdcssf245acga9nln012mzccv3dbf == \i\5\s\o\s\t\i\1\2\z\x\p\n\1\s\5\g\v\j\6\m\i\w\d\u\x\l\g\6\9\n\k\a\x\1\w\m\h\f\6\c\5\u\y\w\z\0\l\r\b\6\p\f\7\d\u\d\p\p\d\c\p\2\j\f\z\o\h\p\9\t\1\2\p\s\q\4\6\8\j\b\z\j\u\9\a\i\x\z\b\2\9\k\1\1\l\8\b\1\v\q\0\9\k\a\j\8\d\d\k\4\t\c\i\h\v\6\z\2\h\j\c\u\a\m\0\g\8\y\x\s\h\y\x\o\k\6\9\r\h\z\l\i\4\8\5\h\k\f\s\t\v\8\5\f\1\c\b\2\s\3\a\t\g\t\l\g\n\6\x\e\l\7\y\c\u\g\2\4\x\2\s\x\9\e\x\q\h\2\l\f\5\1\z\j\7\e\5\9\n\p\t\b\a\s\m\0\h\d\4\8\n\v\k\n\i\f\u\5\5\2\8\9\p\s\9\o\b\x\t\7\9\g\p\z\o\l\z\q\9\h\v\3\j\0\u\p\e\t\9\0\n\u\b\9\j\3\5\c\q\z\0\9\n\b\e\a\m\2\3\w\u\t\8\r\c\2\z\j\6\x\x\g\c\4\4\q\z\h\9\c\s\w\m\w\4\6\9\j\l\d\t\4\2\s\t\u\t\v\9\u\q\y\0\q\z\q\u\y\0\t\m\8\n\4\g\g\p\b\j\o\d\z\7\8\3\8\p\u\s\l\a\t\9\c\1\k\4\e\7\m\1\n\r\1\u\1\a\s\3\9\n\j\e\g\w\6\q\b\d\z\3\x\q\b\q\8\z\7\s\g\h\i\p\2\j\z\t\k\i\k\g\v\v\u\7\r\w\n\a\5\i\s\6\s\z\x\e\o\y\0\j\8\f\h\o\s\7\t\c\9\d\m\r\f\9\q\t\t\l\x\d\2\l\z\h\n\1\0\1\o\a\q\m\o\u\p\5\o\b\8\l\h\7\n\5\t\9\t\r\n\c\k\s\3\f\e\w\m\q\i\x\p\v\f\8\l\0\7\l\1\v\7\j\d\c\s\s\f\2\4\5\a\c\g\a\9\n\l\n\0\1\2\m\z\c\c\v\3\d\b\f ]] 00:08:41.137 13:42:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:41.137 13:42:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:41.137 [2024-10-01 13:42:51.259195] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:41.137 [2024-10-01 13:42:51.259347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60946 ] 00:08:41.394 [2024-10-01 13:42:51.398556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.394 [2024-10-01 13:42:51.521488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.651 [2024-10-01 13:42:51.576981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.909  Copying: 512/512 [B] (average 250 kBps) 00:08:41.909 00:08:41.909 13:42:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ i5sosti12zxpn1s5gvj6miwduxlg69nkax1wmhf6c5uywz0lrb6pf7dudppdcp2jfzohp9t12psq468jbzju9aixzb29k11l8b1vq09kaj8ddk4tcihv6z2hjcuam0g8yxshyxok69rhzli485hkfstv85f1cb2s3atgtlgn6xel7ycug24x2sx9exqh2lf51zj7e59nptbasm0hd48nvknifu55289ps9obxt79gpzolzq9hv3j0upet90nub9j35cqz09nbeam23wut8rc2zj6xxgc44qzh9cswmw469jldt42stutv9uqy0qzquy0tm8n4ggpbjodz7838puslat9c1k4e7m1nr1u1as39njegw6qbdz3xqbq8z7sghip2jztkikgvvu7rwna5is6szxeoy0j8fhos7tc9dmrf9qttlxd2lzhn101oaqmoup5ob8lh7n5t9trncks3fewmqixpvf8l07l1v7jdcssf245acga9nln012mzccv3dbf == \i\5\s\o\s\t\i\1\2\z\x\p\n\1\s\5\g\v\j\6\m\i\w\d\u\x\l\g\6\9\n\k\a\x\1\w\m\h\f\6\c\5\u\y\w\z\0\l\r\b\6\p\f\7\d\u\d\p\p\d\c\p\2\j\f\z\o\h\p\9\t\1\2\p\s\q\4\6\8\j\b\z\j\u\9\a\i\x\z\b\2\9\k\1\1\l\8\b\1\v\q\0\9\k\a\j\8\d\d\k\4\t\c\i\h\v\6\z\2\h\j\c\u\a\m\0\g\8\y\x\s\h\y\x\o\k\6\9\r\h\z\l\i\4\8\5\h\k\f\s\t\v\8\5\f\1\c\b\2\s\3\a\t\g\t\l\g\n\6\x\e\l\7\y\c\u\g\2\4\x\2\s\x\9\e\x\q\h\2\l\f\5\1\z\j\7\e\5\9\n\p\t\b\a\s\m\0\h\d\4\8\n\v\k\n\i\f\u\5\5\2\8\9\p\s\9\o\b\x\t\7\9\g\p\z\o\l\z\q\9\h\v\3\j\0\u\p\e\t\9\0\n\u\b\9\j\3\5\c\q\z\0\9\n\b\e\a\m\2\3\w\u\t\8\r\c\2\z\j\6\x\x\g\c\4\4\q\z\h\9\c\s\w\m\w\4\6\9\j\l\d\t\4\2\s\t\u\t\v\9\u\q\y\0\q\z\q\u\y\0\t\m\8\n\4\g\g\p\b\j\o\d\z\7\8\3\8\p\u\s\l\a\t\9\c\1\k\4\e\7\m\1\n\r\1\u\1\a\s\3\9\n\j\e\g\w\6\q\b\d\z\3\x\q\b\q\8\z\7\s\g\h\i\p\2\j\z\t\k\i\k\g\v\v\u\7\r\w\n\a\5\i\s\6\s\z\x\e\o\y\0\j\8\f\h\o\s\7\t\c\9\d\m\r\f\9\q\t\t\l\x\d\2\l\z\h\n\1\0\1\o\a\q\m\o\u\p\5\o\b\8\l\h\7\n\5\t\9\t\r\n\c\k\s\3\f\e\w\m\q\i\x\p\v\f\8\l\0\7\l\1\v\7\j\d\c\s\s\f\2\4\5\a\c\g\a\9\n\l\n\0\1\2\m\z\c\c\v\3\d\b\f ]] 00:08:41.909 13:42:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:41.909 13:42:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:41.909 [2024-10-01 13:42:51.930785] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:41.909 [2024-10-01 13:42:51.930898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60957 ] 00:08:41.909 [2024-10-01 13:42:52.068112] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.166 [2024-10-01 13:42:52.185335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.166 [2024-10-01 13:42:52.238863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.423  Copying: 512/512 [B] (average 250 kBps) 00:08:42.423 00:08:42.423 ************************************ 00:08:42.423 END TEST dd_flags_misc_forced_aio 00:08:42.423 ************************************ 00:08:42.423 13:42:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ i5sosti12zxpn1s5gvj6miwduxlg69nkax1wmhf6c5uywz0lrb6pf7dudppdcp2jfzohp9t12psq468jbzju9aixzb29k11l8b1vq09kaj8ddk4tcihv6z2hjcuam0g8yxshyxok69rhzli485hkfstv85f1cb2s3atgtlgn6xel7ycug24x2sx9exqh2lf51zj7e59nptbasm0hd48nvknifu55289ps9obxt79gpzolzq9hv3j0upet90nub9j35cqz09nbeam23wut8rc2zj6xxgc44qzh9cswmw469jldt42stutv9uqy0qzquy0tm8n4ggpbjodz7838puslat9c1k4e7m1nr1u1as39njegw6qbdz3xqbq8z7sghip2jztkikgvvu7rwna5is6szxeoy0j8fhos7tc9dmrf9qttlxd2lzhn101oaqmoup5ob8lh7n5t9trncks3fewmqixpvf8l07l1v7jdcssf245acga9nln012mzccv3dbf == \i\5\s\o\s\t\i\1\2\z\x\p\n\1\s\5\g\v\j\6\m\i\w\d\u\x\l\g\6\9\n\k\a\x\1\w\m\h\f\6\c\5\u\y\w\z\0\l\r\b\6\p\f\7\d\u\d\p\p\d\c\p\2\j\f\z\o\h\p\9\t\1\2\p\s\q\4\6\8\j\b\z\j\u\9\a\i\x\z\b\2\9\k\1\1\l\8\b\1\v\q\0\9\k\a\j\8\d\d\k\4\t\c\i\h\v\6\z\2\h\j\c\u\a\m\0\g\8\y\x\s\h\y\x\o\k\6\9\r\h\z\l\i\4\8\5\h\k\f\s\t\v\8\5\f\1\c\b\2\s\3\a\t\g\t\l\g\n\6\x\e\l\7\y\c\u\g\2\4\x\2\s\x\9\e\x\q\h\2\l\f\5\1\z\j\7\e\5\9\n\p\t\b\a\s\m\0\h\d\4\8\n\v\k\n\i\f\u\5\5\2\8\9\p\s\9\o\b\x\t\7\9\g\p\z\o\l\z\q\9\h\v\3\j\0\u\p\e\t\9\0\n\u\b\9\j\3\5\c\q\z\0\9\n\b\e\a\m\2\3\w\u\t\8\r\c\2\z\j\6\x\x\g\c\4\4\q\z\h\9\c\s\w\m\w\4\6\9\j\l\d\t\4\2\s\t\u\t\v\9\u\q\y\0\q\z\q\u\y\0\t\m\8\n\4\g\g\p\b\j\o\d\z\7\8\3\8\p\u\s\l\a\t\9\c\1\k\4\e\7\m\1\n\r\1\u\1\a\s\3\9\n\j\e\g\w\6\q\b\d\z\3\x\q\b\q\8\z\7\s\g\h\i\p\2\j\z\t\k\i\k\g\v\v\u\7\r\w\n\a\5\i\s\6\s\z\x\e\o\y\0\j\8\f\h\o\s\7\t\c\9\d\m\r\f\9\q\t\t\l\x\d\2\l\z\h\n\1\0\1\o\a\q\m\o\u\p\5\o\b\8\l\h\7\n\5\t\9\t\r\n\c\k\s\3\f\e\w\m\q\i\x\p\v\f\8\l\0\7\l\1\v\7\j\d\c\s\s\f\2\4\5\a\c\g\a\9\n\l\n\0\1\2\m\z\c\c\v\3\d\b\f ]] 00:08:42.423 00:08:42.423 real 0m5.665s 00:08:42.423 user 0m3.358s 00:08:42.423 sys 0m1.304s 00:08:42.423 13:42:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.423 13:42:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:42.423 13:42:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:42.423 13:42:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:42.423 13:42:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:42.423 ************************************ 00:08:42.423 END TEST spdk_dd_posix 00:08:42.423 ************************************ 00:08:42.423 00:08:42.423 real 0m25.101s 00:08:42.423 user 0m13.721s 00:08:42.423 sys 0m7.494s 00:08:42.423 13:42:52 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.423 13:42:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:42.681 13:42:52 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:42.681 13:42:52 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.681 13:42:52 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.681 13:42:52 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:42.681 ************************************ 00:08:42.681 START TEST spdk_dd_malloc 00:08:42.681 ************************************ 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:42.681 * Looking for test storage... 00:08:42.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:42.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.681 --rc genhtml_branch_coverage=1 00:08:42.681 --rc genhtml_function_coverage=1 00:08:42.681 --rc genhtml_legend=1 00:08:42.681 --rc geninfo_all_blocks=1 00:08:42.681 --rc geninfo_unexecuted_blocks=1 00:08:42.681 00:08:42.681 ' 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:42.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.681 --rc genhtml_branch_coverage=1 00:08:42.681 --rc genhtml_function_coverage=1 00:08:42.681 --rc genhtml_legend=1 00:08:42.681 --rc geninfo_all_blocks=1 00:08:42.681 --rc geninfo_unexecuted_blocks=1 00:08:42.681 00:08:42.681 ' 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:42.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.681 --rc genhtml_branch_coverage=1 00:08:42.681 --rc genhtml_function_coverage=1 00:08:42.681 --rc genhtml_legend=1 00:08:42.681 --rc geninfo_all_blocks=1 00:08:42.681 --rc geninfo_unexecuted_blocks=1 00:08:42.681 00:08:42.681 ' 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:42.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.681 --rc genhtml_branch_coverage=1 00:08:42.681 --rc genhtml_function_coverage=1 00:08:42.681 --rc genhtml_legend=1 00:08:42.681 --rc geninfo_all_blocks=1 00:08:42.681 --rc geninfo_unexecuted_blocks=1 00:08:42.681 00:08:42.681 ' 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.681 13:42:52 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:42.681 13:42:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.682 13:42:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.682 13:42:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:42.682 ************************************ 00:08:42.682 START TEST dd_malloc_copy 00:08:42.682 ************************************ 00:08:42.682 13:42:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:08:42.682 13:42:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:42.682 13:42:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:42.682 13:42:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
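Each of the two malloc bdevs declared above is 1,048,576 blocks of 512 bytes, i.e. 1,048,576 x 512 B = 536,870,912 B = 512 MiB, which is why the progress lines below report 512 MB copied in each direction (malloc0 to malloc1, then malloc1 back to malloc0).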
00:08:42.682 13:42:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:42.682 13:42:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:42.682 13:42:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:42.682 13:42:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:42.682 13:42:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:42.682 13:42:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:42.682 13:42:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:42.939 { 00:08:42.939 "subsystems": [ 00:08:42.939 { 00:08:42.939 "subsystem": "bdev", 00:08:42.939 "config": [ 00:08:42.939 { 00:08:42.939 "params": { 00:08:42.939 "block_size": 512, 00:08:42.939 "num_blocks": 1048576, 00:08:42.939 "name": "malloc0" 00:08:42.939 }, 00:08:42.939 "method": "bdev_malloc_create" 00:08:42.939 }, 00:08:42.939 { 00:08:42.939 "params": { 00:08:42.939 "block_size": 512, 00:08:42.939 "num_blocks": 1048576, 00:08:42.939 "name": "malloc1" 00:08:42.939 }, 00:08:42.939 "method": "bdev_malloc_create" 00:08:42.939 }, 00:08:42.939 { 00:08:42.939 "method": "bdev_wait_for_examine" 00:08:42.939 } 00:08:42.939 ] 00:08:42.939 } 00:08:42.939 ] 00:08:42.939 } 00:08:42.939 [2024-10-01 13:42:52.870399] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:42.939 [2024-10-01 13:42:52.870511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61039 ] 00:08:42.939 [2024-10-01 13:42:53.010023] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.197 [2024-10-01 13:42:53.143962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.197 [2024-10-01 13:42:53.202709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.015  Copying: 197/512 [MB] (197 MBps) Copying: 384/512 [MB] (187 MBps) Copying: 512/512 [MB] (average 193 MBps) 00:08:47.015 00:08:47.015 13:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:47.015 13:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:47.015 13:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:47.015 13:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:47.273 [2024-10-01 13:42:57.200821] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
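At the average rates reported (about 193 MBps forward and, further down, about 178 MBps for the reverse pass), each 512 MiB copy accounts for roughly 512 / 193 ≈ 2.7 s and 512 / 178 ≈ 2.9 s of transfer time; the remainder of the roughly 8.7 s wall-clock total printed below is presumably start-up and teardown of the two separate spdk_dd application instances (EAL init and bdev examine) rather than the copies themselves.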
00:08:47.273 [2024-10-01 13:42:57.200966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61092 ] 00:08:47.273 { 00:08:47.273 "subsystems": [ 00:08:47.273 { 00:08:47.273 "subsystem": "bdev", 00:08:47.273 "config": [ 00:08:47.273 { 00:08:47.273 "params": { 00:08:47.273 "block_size": 512, 00:08:47.273 "num_blocks": 1048576, 00:08:47.273 "name": "malloc0" 00:08:47.273 }, 00:08:47.273 "method": "bdev_malloc_create" 00:08:47.273 }, 00:08:47.273 { 00:08:47.273 "params": { 00:08:47.273 "block_size": 512, 00:08:47.273 "num_blocks": 1048576, 00:08:47.273 "name": "malloc1" 00:08:47.273 }, 00:08:47.273 "method": "bdev_malloc_create" 00:08:47.273 }, 00:08:47.273 { 00:08:47.273 "method": "bdev_wait_for_examine" 00:08:47.273 } 00:08:47.273 ] 00:08:47.273 } 00:08:47.273 ] 00:08:47.273 } 00:08:47.273 [2024-10-01 13:42:57.342804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.529 [2024-10-01 13:42:57.510524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.529 [2024-10-01 13:42:57.594842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.406  Copying: 196/512 [MB] (196 MBps) Copying: 348/512 [MB] (151 MBps) Copying: 512/512 [MB] (average 178 MBps) 00:08:51.406 00:08:51.406 ************************************ 00:08:51.406 END TEST dd_malloc_copy 00:08:51.406 ************************************ 00:08:51.406 00:08:51.406 real 0m8.700s 00:08:51.406 user 0m7.503s 00:08:51.406 sys 0m1.028s 00:08:51.406 13:43:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.406 13:43:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:51.406 ************************************ 00:08:51.406 END TEST spdk_dd_malloc 00:08:51.406 ************************************ 00:08:51.406 00:08:51.406 real 0m8.939s 00:08:51.406 user 0m7.648s 00:08:51.406 sys 0m1.123s 00:08:51.406 13:43:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.406 13:43:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:51.665 13:43:01 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:51.665 13:43:01 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:51.665 13:43:01 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.665 13:43:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:51.665 ************************************ 00:08:51.665 START TEST spdk_dd_bdev_to_bdev 00:08:51.665 ************************************ 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:51.665 * Looking for test storage... 
00:08:51.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:51.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.665 --rc genhtml_branch_coverage=1 00:08:51.665 --rc genhtml_function_coverage=1 00:08:51.665 --rc genhtml_legend=1 00:08:51.665 --rc geninfo_all_blocks=1 00:08:51.665 --rc geninfo_unexecuted_blocks=1 00:08:51.665 00:08:51.665 ' 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:51.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.665 --rc genhtml_branch_coverage=1 00:08:51.665 --rc genhtml_function_coverage=1 00:08:51.665 --rc genhtml_legend=1 00:08:51.665 --rc geninfo_all_blocks=1 00:08:51.665 --rc geninfo_unexecuted_blocks=1 00:08:51.665 00:08:51.665 ' 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:51.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.665 --rc genhtml_branch_coverage=1 00:08:51.665 --rc genhtml_function_coverage=1 00:08:51.665 --rc genhtml_legend=1 00:08:51.665 --rc geninfo_all_blocks=1 00:08:51.665 --rc geninfo_unexecuted_blocks=1 00:08:51.665 00:08:51.665 ' 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:51.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.665 --rc genhtml_branch_coverage=1 00:08:51.665 --rc genhtml_function_coverage=1 00:08:51.665 --rc genhtml_legend=1 00:08:51.665 --rc geninfo_all_blocks=1 00:08:51.665 --rc geninfo_unexecuted_blocks=1 00:08:51.665 00:08:51.665 ' 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.665 13:43:01 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.665 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:51.666 ************************************ 00:08:51.666 START TEST dd_inflate_file 00:08:51.666 ************************************ 00:08:51.666 13:43:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:51.924 [2024-10-01 13:43:01.852351] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
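dd_inflate_file grows the dump file by appending 64 one-MiB blocks read from /dev/zero. Since the file starts out holding only the 26-character magic line plus its newline (27 bytes), the wc -c check further down should report 64 x 1,048,576 + 27 = 67,108,891 bytes. A minimal reproduction, with SPDK_DD and DUMP0 as illustrative placeholders and the redirection of the echo inferred from that final size rather than visible in the xtrace:

  echo 'This Is Our Magic, find it' > "$DUMP0"     # 27 bytes including the trailing newline
  "$SPDK_DD" --if=/dev/zero --of="$DUMP0" --oflag=append --bs=1048576 --count=64
  wc -c < "$DUMP0"                                 # expected: 67108891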
00:08:51.924 [2024-10-01 13:43:01.852488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61221 ] 00:08:51.924 [2024-10-01 13:43:01.991023] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.924 [2024-10-01 13:43:02.097946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.181 [2024-10-01 13:43:02.150689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.440  Copying: 64/64 [MB] (average 1641 MBps) 00:08:52.440 00:08:52.440 00:08:52.440 real 0m0.688s 00:08:52.440 user 0m0.431s 00:08:52.440 sys 0m0.306s 00:08:52.440 13:43:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.440 13:43:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:52.440 ************************************ 00:08:52.440 END TEST dd_inflate_file 00:08:52.440 ************************************ 00:08:52.440 13:43:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:52.440 13:43:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:52.440 13:43:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:52.440 13:43:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:52.440 13:43:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.440 13:43:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:52.440 13:43:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:52.440 13:43:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:52.440 13:43:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:52.440 ************************************ 00:08:52.440 START TEST dd_copy_to_out_bdev 00:08:52.440 ************************************ 00:08:52.440 13:43:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:52.440 [2024-10-01 13:43:02.580450] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
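dd_copy_to_out_bdev then pushes that file into the Nvme0n1 bdev using the generated two-controller JSON config shown below. Because 67,108,891 bytes is 64 full 1 MiB blocks plus a 27-byte tail, 65 one-MiB blocks are needed to cover the whole file, which is consistent with the count=65 the script settles on afterwards for the offset-magic copies; the "Copying: 64/64 [MB]" progress line appears to count whole megabytes only.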
00:08:52.440 { 00:08:52.440 "subsystems": [ 00:08:52.440 { 00:08:52.440 "subsystem": "bdev", 00:08:52.440 "config": [ 00:08:52.440 { 00:08:52.440 "params": { 00:08:52.440 "trtype": "pcie", 00:08:52.440 "traddr": "0000:00:10.0", 00:08:52.440 "name": "Nvme0" 00:08:52.440 }, 00:08:52.440 "method": "bdev_nvme_attach_controller" 00:08:52.440 }, 00:08:52.440 { 00:08:52.440 "params": { 00:08:52.440 "trtype": "pcie", 00:08:52.440 "traddr": "0000:00:11.0", 00:08:52.440 "name": "Nvme1" 00:08:52.440 }, 00:08:52.440 "method": "bdev_nvme_attach_controller" 00:08:52.440 }, 00:08:52.440 { 00:08:52.440 "method": "bdev_wait_for_examine" 00:08:52.440 } 00:08:52.440 ] 00:08:52.440 } 00:08:52.440 ] 00:08:52.440 } 00:08:52.440 [2024-10-01 13:43:02.581306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61254 ] 00:08:52.698 [2024-10-01 13:43:02.722128] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.698 [2024-10-01 13:43:02.849020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.955 [2024-10-01 13:43:02.902198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.151  Copying: 64/64 [MB] (average 68 MBps) 00:08:54.151 00:08:54.151 00:08:54.151 real 0m1.749s 00:08:54.151 user 0m1.513s 00:08:54.151 sys 0m1.286s 00:08:54.151 ************************************ 00:08:54.151 END TEST dd_copy_to_out_bdev 00:08:54.151 ************************************ 00:08:54.151 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.151 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:54.151 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:54.151 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:54.151 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.151 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.151 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:54.152 ************************************ 00:08:54.152 START TEST dd_offset_magic 00:08:54.152 ************************************ 00:08:54.152 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:08:54.152 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:54.152 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:54.152 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:54.152 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:54.152 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:54.152 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:54.152 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- 
dd/common.sh@31 -- # xtrace_disable 00:08:54.152 13:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:54.409 [2024-10-01 13:43:04.376488] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:54.409 [2024-10-01 13:43:04.376588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61294 ] 00:08:54.409 { 00:08:54.409 "subsystems": [ 00:08:54.409 { 00:08:54.409 "subsystem": "bdev", 00:08:54.409 "config": [ 00:08:54.409 { 00:08:54.409 "params": { 00:08:54.409 "trtype": "pcie", 00:08:54.409 "traddr": "0000:00:10.0", 00:08:54.409 "name": "Nvme0" 00:08:54.410 }, 00:08:54.410 "method": "bdev_nvme_attach_controller" 00:08:54.410 }, 00:08:54.410 { 00:08:54.410 "params": { 00:08:54.410 "trtype": "pcie", 00:08:54.410 "traddr": "0000:00:11.0", 00:08:54.410 "name": "Nvme1" 00:08:54.410 }, 00:08:54.410 "method": "bdev_nvme_attach_controller" 00:08:54.410 }, 00:08:54.410 { 00:08:54.410 "method": "bdev_wait_for_examine" 00:08:54.410 } 00:08:54.410 ] 00:08:54.410 } 00:08:54.410 ] 00:08:54.410 } 00:08:54.410 [2024-10-01 13:43:04.509413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.667 [2024-10-01 13:43:04.634168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.667 [2024-10-01 13:43:04.688902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.181  Copying: 65/65 [MB] (average 942 MBps) 00:08:55.181 00:08:55.181 13:43:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:55.181 13:43:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:55.182 13:43:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:55.182 13:43:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:55.182 { 00:08:55.182 "subsystems": [ 00:08:55.182 { 00:08:55.182 "subsystem": "bdev", 00:08:55.182 "config": [ 00:08:55.182 { 00:08:55.182 "params": { 00:08:55.182 "trtype": "pcie", 00:08:55.182 "traddr": "0000:00:10.0", 00:08:55.182 "name": "Nvme0" 00:08:55.182 }, 00:08:55.182 "method": "bdev_nvme_attach_controller" 00:08:55.182 }, 00:08:55.182 { 00:08:55.182 "params": { 00:08:55.182 "trtype": "pcie", 00:08:55.182 "traddr": "0000:00:11.0", 00:08:55.182 "name": "Nvme1" 00:08:55.182 }, 00:08:55.182 "method": "bdev_nvme_attach_controller" 00:08:55.182 }, 00:08:55.182 { 00:08:55.182 "method": "bdev_wait_for_examine" 00:08:55.182 } 00:08:55.182 ] 00:08:55.182 } 00:08:55.182 ] 00:08:55.182 } 00:08:55.182 [2024-10-01 13:43:05.262240] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:55.182 [2024-10-01 13:43:05.262364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61314 ] 00:08:55.439 [2024-10-01 13:43:05.395292] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.439 [2024-10-01 13:43:05.522667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.439 [2024-10-01 13:43:05.576956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.953  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:55.953 00:08:55.953 13:43:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:55.953 13:43:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:55.953 13:43:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:55.953 13:43:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:55.953 13:43:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:55.953 13:43:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:55.953 13:43:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 [2024-10-01 13:43:06.044444] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
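Both offset_magic passes (seek/skip 16, then 64) follow the pattern visible in the xtrace: copy 65 one-MiB blocks from Nvme0n1 into Nvme1n1 at the given block offset, read one block back from Nvme1n1 at the same offset into dd.dump1, and check that it still begins with the 26-byte magic string. Paraphrased as a standalone sketch, with SPDK_DD and DUMP1 as illustrative placeholders and gen_conf standing in for whatever produces the JSON passed on /dev/fd/62:

  offsets=(16 64)
  for offset in "${offsets[@]}"; do
      "$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek="$offset" --bs=1048576 --json <(gen_conf)
      "$SPDK_DD" --ib=Nvme1n1 --of="$DUMP1" --count=1 --skip="$offset" --bs=1048576 --json <(gen_conf)
      read -rn26 magic_check < "$DUMP1"
      [[ "$magic_check" == 'This Is Our Magic, find it' ]]   # the magic must survive the round trip
  done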
00:08:55.953 [2024-10-01 13:43:06.044798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61336 ] 00:08:55.953 { 00:08:55.953 "subsystems": [ 00:08:55.953 { 00:08:55.953 "subsystem": "bdev", 00:08:55.953 "config": [ 00:08:55.953 { 00:08:55.953 "params": { 00:08:55.953 "trtype": "pcie", 00:08:55.953 "traddr": "0000:00:10.0", 00:08:55.953 "name": "Nvme0" 00:08:55.953 }, 00:08:55.953 "method": "bdev_nvme_attach_controller" 00:08:55.953 }, 00:08:55.953 { 00:08:55.953 "params": { 00:08:55.953 "trtype": "pcie", 00:08:55.953 "traddr": "0000:00:11.0", 00:08:55.953 "name": "Nvme1" 00:08:55.953 }, 00:08:55.953 "method": "bdev_nvme_attach_controller" 00:08:55.953 }, 00:08:55.953 { 00:08:55.953 "method": "bdev_wait_for_examine" 00:08:55.953 } 00:08:55.953 ] 00:08:55.953 } 00:08:55.953 ] 00:08:55.953 } 00:08:56.210 [2024-10-01 13:43:06.180604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.210 [2024-10-01 13:43:06.300996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.210 [2024-10-01 13:43:06.355623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.032  Copying: 65/65 [MB] (average 1065 MBps) 00:08:57.032 00:08:57.032 13:43:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:57.032 13:43:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:57.032 13:43:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:57.032 13:43:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:57.032 [2024-10-01 13:43:06.963563] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
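The dd_offset_magic passes above follow one pattern per offset: copy 65 MiB from Nvme0n1 into Nvme1n1 at a given 1 MiB block offset (--seek), read a single block back from that offset (--skip), and check for the 26-byte magic string "This Is Our Magic, find it". A rough stand-alone sketch of one pass under the same bdev config dumped above; the config file path and dump path here are illustrative rather than the harness's own, and where exactly the magic sits inside the copied window is not visible in this output:

cat > /tmp/nvme_bdevs.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
    "method": "bdev_nvme_attach_controller" },
  { "params": { "trtype": "pcie", "traddr": "0000:00:11.0", "name": "Nvme1" },
    "method": "bdev_nvme_attach_controller" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
# write 65 MiB at block offset 64, then pull one block back out from the same offset
spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /tmp/nvme_bdevs.json
spdk_dd --ib=Nvme1n1 --of=/tmp/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /tmp/nvme_bdevs.json
read -rn26 magic_check < /tmp/dd.dump1
[[ $magic_check == "This Is Our Magic, find it" ]] || echo "magic not found at offset 64" >&2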
00:08:57.032 [2024-10-01 13:43:06.963660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61356 ] 00:08:57.032 { 00:08:57.032 "subsystems": [ 00:08:57.032 { 00:08:57.032 "subsystem": "bdev", 00:08:57.032 "config": [ 00:08:57.032 { 00:08:57.032 "params": { 00:08:57.032 "trtype": "pcie", 00:08:57.032 "traddr": "0000:00:10.0", 00:08:57.032 "name": "Nvme0" 00:08:57.032 }, 00:08:57.032 "method": "bdev_nvme_attach_controller" 00:08:57.032 }, 00:08:57.032 { 00:08:57.032 "params": { 00:08:57.032 "trtype": "pcie", 00:08:57.032 "traddr": "0000:00:11.0", 00:08:57.032 "name": "Nvme1" 00:08:57.032 }, 00:08:57.032 "method": "bdev_nvme_attach_controller" 00:08:57.032 }, 00:08:57.032 { 00:08:57.032 "method": "bdev_wait_for_examine" 00:08:57.032 } 00:08:57.032 ] 00:08:57.032 } 00:08:57.032 ] 00:08:57.032 } 00:08:57.032 [2024-10-01 13:43:07.103309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.290 [2024-10-01 13:43:07.222302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.290 [2024-10-01 13:43:07.277413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.548  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:57.548 00:08:57.548 ************************************ 00:08:57.548 END TEST dd_offset_magic 00:08:57.548 ************************************ 00:08:57.548 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:57.548 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:57.548 00:08:57.548 real 0m3.375s 00:08:57.548 user 0m2.482s 00:08:57.548 sys 0m0.958s 00:08:57.548 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.548 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:57.806 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:57.806 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:57.806 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:57.806 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:57.806 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:57.806 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:57.806 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:57.806 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:57.806 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:57.806 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:57.806 13:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:57.806 { 00:08:57.806 "subsystems": [ 00:08:57.806 { 00:08:57.806 "subsystem": "bdev", 00:08:57.806 "config": [ 00:08:57.806 { 00:08:57.806 "params": { 00:08:57.806 "trtype": "pcie", 00:08:57.806 "traddr": "0000:00:10.0", 00:08:57.806 
"name": "Nvme0" 00:08:57.806 }, 00:08:57.806 "method": "bdev_nvme_attach_controller" 00:08:57.806 }, 00:08:57.806 { 00:08:57.806 "params": { 00:08:57.806 "trtype": "pcie", 00:08:57.806 "traddr": "0000:00:11.0", 00:08:57.806 "name": "Nvme1" 00:08:57.806 }, 00:08:57.806 "method": "bdev_nvme_attach_controller" 00:08:57.806 }, 00:08:57.806 { 00:08:57.806 "method": "bdev_wait_for_examine" 00:08:57.806 } 00:08:57.806 ] 00:08:57.806 } 00:08:57.806 ] 00:08:57.806 } 00:08:57.806 [2024-10-01 13:43:07.795718] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:57.806 [2024-10-01 13:43:07.795814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61391 ] 00:08:57.806 [2024-10-01 13:43:07.935281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.065 [2024-10-01 13:43:08.059036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.065 [2024-10-01 13:43:08.117956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.584  Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:58.584 00:08:58.584 13:43:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:58.584 13:43:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:58.584 13:43:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:58.584 13:43:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:58.584 13:43:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:58.584 13:43:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:58.584 13:43:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:58.584 13:43:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:58.584 13:43:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:58.584 13:43:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:58.584 [2024-10-01 13:43:08.598057] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:58.584 [2024-10-01 13:43:08.598441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61406 ] 00:08:58.584 { 00:08:58.584 "subsystems": [ 00:08:58.584 { 00:08:58.584 "subsystem": "bdev", 00:08:58.584 "config": [ 00:08:58.584 { 00:08:58.584 "params": { 00:08:58.584 "trtype": "pcie", 00:08:58.584 "traddr": "0000:00:10.0", 00:08:58.584 "name": "Nvme0" 00:08:58.584 }, 00:08:58.584 "method": "bdev_nvme_attach_controller" 00:08:58.584 }, 00:08:58.584 { 00:08:58.584 "params": { 00:08:58.584 "trtype": "pcie", 00:08:58.584 "traddr": "0000:00:11.0", 00:08:58.584 "name": "Nvme1" 00:08:58.584 }, 00:08:58.584 "method": "bdev_nvme_attach_controller" 00:08:58.584 }, 00:08:58.584 { 00:08:58.584 "method": "bdev_wait_for_examine" 00:08:58.584 } 00:08:58.584 ] 00:08:58.584 } 00:08:58.584 ] 00:08:58.584 } 00:08:58.584 [2024-10-01 13:43:08.735209] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.843 [2024-10-01 13:43:08.856643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.843 [2024-10-01 13:43:08.914161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.360  Copying: 5120/5120 [kB] (average 833 MBps) 00:08:59.360 00:08:59.360 13:43:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:59.360 ************************************ 00:08:59.360 END TEST spdk_dd_bdev_to_bdev 00:08:59.360 ************************************ 00:08:59.360 00:08:59.360 real 0m7.750s 00:08:59.360 user 0m5.720s 00:08:59.360 sys 0m3.296s 00:08:59.360 13:43:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.360 13:43:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:59.360 13:43:09 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:59.360 13:43:09 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:59.360 13:43:09 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:59.360 13:43:09 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.360 13:43:09 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:59.360 ************************************ 00:08:59.360 START TEST spdk_dd_uring 00:08:59.360 ************************************ 00:08:59.360 13:43:09 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:59.360 * Looking for test storage... 
00:08:59.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:59.360 13:43:09 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:59.360 13:43:09 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:08:59.360 13:43:09 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:59.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.621 --rc genhtml_branch_coverage=1 00:08:59.621 --rc genhtml_function_coverage=1 00:08:59.621 --rc genhtml_legend=1 00:08:59.621 --rc geninfo_all_blocks=1 00:08:59.621 --rc geninfo_unexecuted_blocks=1 00:08:59.621 00:08:59.621 ' 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:59.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.621 --rc genhtml_branch_coverage=1 00:08:59.621 --rc genhtml_function_coverage=1 00:08:59.621 --rc genhtml_legend=1 00:08:59.621 --rc geninfo_all_blocks=1 00:08:59.621 --rc geninfo_unexecuted_blocks=1 00:08:59.621 00:08:59.621 ' 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:59.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.621 --rc genhtml_branch_coverage=1 00:08:59.621 --rc genhtml_function_coverage=1 00:08:59.621 --rc genhtml_legend=1 00:08:59.621 --rc geninfo_all_blocks=1 00:08:59.621 --rc geninfo_unexecuted_blocks=1 00:08:59.621 00:08:59.621 ' 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:59.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.621 --rc genhtml_branch_coverage=1 00:08:59.621 --rc genhtml_function_coverage=1 00:08:59.621 --rc genhtml_legend=1 00:08:59.621 --rc geninfo_all_blocks=1 00:08:59.621 --rc geninfo_unexecuted_blocks=1 00:08:59.621 00:08:59.621 ' 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.621 13:43:09 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:59.621 ************************************ 00:08:59.621 START TEST dd_uring_copy 00:08:59.621 ************************************ 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:59.622 
13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=5t1sifecwf17dish6xqg5u2hq4kex4a38e7c5ww3s0xqsb8148bz22slzufmjtkblw7cxhbczwe6bum67ys1l1v3y4uhye06lysqjk1e5l5phd6fxlqw0el0nfm9k920ccs06bve31c933j08k93wbx2jxxtpcefl4rsigjdzybaywzvqswmih9sprvs0m54gkc9xuo4fakz1bv0ace2btf195pahg8so8y0jl0irtu4rrtb47i7cmpj0xoyfglx4x8pxkz46sgu8bn25yi5du7n56nkme7dukt4zo93hfw2fn4i1v4wv0cvpxaiyojwzjpnusorjb70zg29gj7bmgq6tnm9rp2hvhop0vmtubntgh6wzx0u79u7eu87x5s3gmapga8ru3yhna52xbpjhfrwx2povv0bve80um0otea9f6j8c9eohs6h6ovc6345bjrgo5b66pwrcvsodc5oauo3m8lap88jbro6m1lyr6tkjyf1g6ln3uii54lz8xi1pvwtwpocccv0aqy7okbathhd42cihfyphlfqixwqdkll4kmo95wftxyxetor6mqjlxo9bvkm3r9frv7we3deu8uek4npblg2yp8t5lzz03c5w9fagv965ry3s6ariap171u5r2b4n4lmhpwzmatj8e29txck74lzq9qajp8er0j130p28kku5uqwypa03h30lwym7d07f5gmdyg6ngi5f8qlfpbwmediwj3zrmfnsx5wrsc7rece9b9tievab4erju3ia6bgt2khr8qz1mbyifp872qe0fx76ehssg8w1e7j4i7q4naai6yltcsxm4tytqyjbwz0r5lxjcgt7f4eta262gfgkjqcblu1iinqavpb61nxawiqspazgxmdyrk6xzroeokw1q8agphznvsdcvfwa02aysljk1wpk7nwd88f1qq3votb4m421ib5rsos7rafuf7vbov1p3fr1olpstdfor2ztap5msez3rywnz3v4jv2pq4kpqxp6mo7kni8 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
5t1sifecwf17dish6xqg5u2hq4kex4a38e7c5ww3s0xqsb8148bz22slzufmjtkblw7cxhbczwe6bum67ys1l1v3y4uhye06lysqjk1e5l5phd6fxlqw0el0nfm9k920ccs06bve31c933j08k93wbx2jxxtpcefl4rsigjdzybaywzvqswmih9sprvs0m54gkc9xuo4fakz1bv0ace2btf195pahg8so8y0jl0irtu4rrtb47i7cmpj0xoyfglx4x8pxkz46sgu8bn25yi5du7n56nkme7dukt4zo93hfw2fn4i1v4wv0cvpxaiyojwzjpnusorjb70zg29gj7bmgq6tnm9rp2hvhop0vmtubntgh6wzx0u79u7eu87x5s3gmapga8ru3yhna52xbpjhfrwx2povv0bve80um0otea9f6j8c9eohs6h6ovc6345bjrgo5b66pwrcvsodc5oauo3m8lap88jbro6m1lyr6tkjyf1g6ln3uii54lz8xi1pvwtwpocccv0aqy7okbathhd42cihfyphlfqixwqdkll4kmo95wftxyxetor6mqjlxo9bvkm3r9frv7we3deu8uek4npblg2yp8t5lzz03c5w9fagv965ry3s6ariap171u5r2b4n4lmhpwzmatj8e29txck74lzq9qajp8er0j130p28kku5uqwypa03h30lwym7d07f5gmdyg6ngi5f8qlfpbwmediwj3zrmfnsx5wrsc7rece9b9tievab4erju3ia6bgt2khr8qz1mbyifp872qe0fx76ehssg8w1e7j4i7q4naai6yltcsxm4tytqyjbwz0r5lxjcgt7f4eta262gfgkjqcblu1iinqavpb61nxawiqspazgxmdyrk6xzroeokw1q8agphznvsdcvfwa02aysljk1wpk7nwd88f1qq3votb4m421ib5rsos7rafuf7vbov1p3fr1olpstdfor2ztap5msez3rywnz3v4jv2pq4kpqxp6mo7kni8 00:08:59.622 13:43:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:59.622 [2024-10-01 13:43:09.695969] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:59.622 [2024-10-01 13:43:09.696088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61484 ] 00:08:59.888 [2024-10-01 13:43:09.838573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.888 [2024-10-01 13:43:09.969155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.888 [2024-10-01 13:43:10.028541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.082  Copying: 511/511 [MB] (average 1286 MBps) 00:09:01.082 00:09:01.082 13:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:01.082 13:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:09:01.082 13:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:01.082 13:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:01.082 [2024-10-01 13:43:11.168169] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
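The uring copy test sets up its fixtures just above: a 512 MiB zram device and a magic file to push through it. The sizes line up exactly: the 1024-character magic plus a newline plus the 536869887 appended zero bytes come to 536870912 bytes, the same 512 MiB the zram device was given. A sketch of those steps; the disksize attribute is the standard zram sysfs path and is an assumption here, since the output above only shows "echo 512M":

dev_id=$(cat /sys/class/zram-control/hot_add)      # allocates the next free zram index (1 in this run)
echo 512M > /sys/block/zram${dev_id}/disksize      # assumed target of the "echo 512M" above
echo "$magic" > /tmp/magic.dump0                   # $magic: the 1024-character string echoed above
spdk_dd --if=/dev/zero --of=/tmp/magic.dump0 --oflag=append --bs=536869887 --count=1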
00:09:01.082 [2024-10-01 13:43:11.168290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61511 ] 00:09:01.082 { 00:09:01.082 "subsystems": [ 00:09:01.083 { 00:09:01.083 "subsystem": "bdev", 00:09:01.083 "config": [ 00:09:01.083 { 00:09:01.083 "params": { 00:09:01.083 "block_size": 512, 00:09:01.083 "num_blocks": 1048576, 00:09:01.083 "name": "malloc0" 00:09:01.083 }, 00:09:01.083 "method": "bdev_malloc_create" 00:09:01.083 }, 00:09:01.083 { 00:09:01.083 "params": { 00:09:01.083 "filename": "/dev/zram1", 00:09:01.083 "name": "uring0" 00:09:01.083 }, 00:09:01.083 "method": "bdev_uring_create" 00:09:01.083 }, 00:09:01.083 { 00:09:01.083 "method": "bdev_wait_for_examine" 00:09:01.083 } 00:09:01.083 ] 00:09:01.083 } 00:09:01.083 ] 00:09:01.083 } 00:09:01.341 [2024-10-01 13:43:11.304228] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.341 [2024-10-01 13:43:11.417277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.341 [2024-10-01 13:43:11.478718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.786  Copying: 216/512 [MB] (216 MBps) Copying: 408/512 [MB] (191 MBps) Copying: 512/512 [MB] (average 205 MBps) 00:09:04.787 00:09:04.787 13:43:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:04.787 13:43:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:09:04.787 13:43:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:04.787 13:43:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:04.787 { 00:09:04.787 "subsystems": [ 00:09:04.787 { 00:09:04.787 "subsystem": "bdev", 00:09:04.787 "config": [ 00:09:04.787 { 00:09:04.787 "params": { 00:09:04.787 "block_size": 512, 00:09:04.787 "num_blocks": 1048576, 00:09:04.787 "name": "malloc0" 00:09:04.787 }, 00:09:04.787 "method": "bdev_malloc_create" 00:09:04.787 }, 00:09:04.787 { 00:09:04.787 "params": { 00:09:04.787 "filename": "/dev/zram1", 00:09:04.787 "name": "uring0" 00:09:04.787 }, 00:09:04.787 "method": "bdev_uring_create" 00:09:04.787 }, 00:09:04.787 { 00:09:04.787 "method": "bdev_wait_for_examine" 00:09:04.787 } 00:09:04.787 ] 00:09:04.787 } 00:09:04.787 ] 00:09:04.787 } 00:09:04.787 [2024-10-01 13:43:14.894447] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
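Both directions of the copy above run against the same two-bdev layout shown in the dumped config: a 512 MiB malloc bdev (1048576 blocks of 512 bytes) and a uring bdev backed by /dev/zram1. A minimal config file equivalent to what gen_conf feeds over /dev/fd/62, followed by the two copies (paths shortened for readability):

cat > /tmp/uring_copy.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
    "method": "bdev_malloc_create" },
  { "params": { "filename": "/dev/zram1", "name": "uring0" },
    "method": "bdev_uring_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
spdk_dd --if=/tmp/magic.dump0 --ob=uring0 --json /tmp/uring_copy.json    # file -> uring bdev on zram
spdk_dd --ib=uring0 --of=/tmp/magic.dump1 --json /tmp/uring_copy.json    # uring bdev -> file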
00:09:04.787 [2024-10-01 13:43:14.894553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61555 ] 00:09:05.045 [2024-10-01 13:43:15.033434] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.045 [2024-10-01 13:43:15.188179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.303 [2024-10-01 13:43:15.268115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.407  Copying: 153/512 [MB] (153 MBps) Copying: 296/512 [MB] (142 MBps) Copying: 468/512 [MB] (172 MBps) Copying: 512/512 [MB] (average 148 MBps) 00:09:09.407 00:09:09.407 13:43:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:09.408 13:43:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 5t1sifecwf17dish6xqg5u2hq4kex4a38e7c5ww3s0xqsb8148bz22slzufmjtkblw7cxhbczwe6bum67ys1l1v3y4uhye06lysqjk1e5l5phd6fxlqw0el0nfm9k920ccs06bve31c933j08k93wbx2jxxtpcefl4rsigjdzybaywzvqswmih9sprvs0m54gkc9xuo4fakz1bv0ace2btf195pahg8so8y0jl0irtu4rrtb47i7cmpj0xoyfglx4x8pxkz46sgu8bn25yi5du7n56nkme7dukt4zo93hfw2fn4i1v4wv0cvpxaiyojwzjpnusorjb70zg29gj7bmgq6tnm9rp2hvhop0vmtubntgh6wzx0u79u7eu87x5s3gmapga8ru3yhna52xbpjhfrwx2povv0bve80um0otea9f6j8c9eohs6h6ovc6345bjrgo5b66pwrcvsodc5oauo3m8lap88jbro6m1lyr6tkjyf1g6ln3uii54lz8xi1pvwtwpocccv0aqy7okbathhd42cihfyphlfqixwqdkll4kmo95wftxyxetor6mqjlxo9bvkm3r9frv7we3deu8uek4npblg2yp8t5lzz03c5w9fagv965ry3s6ariap171u5r2b4n4lmhpwzmatj8e29txck74lzq9qajp8er0j130p28kku5uqwypa03h30lwym7d07f5gmdyg6ngi5f8qlfpbwmediwj3zrmfnsx5wrsc7rece9b9tievab4erju3ia6bgt2khr8qz1mbyifp872qe0fx76ehssg8w1e7j4i7q4naai6yltcsxm4tytqyjbwz0r5lxjcgt7f4eta262gfgkjqcblu1iinqavpb61nxawiqspazgxmdyrk6xzroeokw1q8agphznvsdcvfwa02aysljk1wpk7nwd88f1qq3votb4m421ib5rsos7rafuf7vbov1p3fr1olpstdfor2ztap5msez3rywnz3v4jv2pq4kpqxp6mo7kni8 == 
\5\t\1\s\i\f\e\c\w\f\1\7\d\i\s\h\6\x\q\g\5\u\2\h\q\4\k\e\x\4\a\3\8\e\7\c\5\w\w\3\s\0\x\q\s\b\8\1\4\8\b\z\2\2\s\l\z\u\f\m\j\t\k\b\l\w\7\c\x\h\b\c\z\w\e\6\b\u\m\6\7\y\s\1\l\1\v\3\y\4\u\h\y\e\0\6\l\y\s\q\j\k\1\e\5\l\5\p\h\d\6\f\x\l\q\w\0\e\l\0\n\f\m\9\k\9\2\0\c\c\s\0\6\b\v\e\3\1\c\9\3\3\j\0\8\k\9\3\w\b\x\2\j\x\x\t\p\c\e\f\l\4\r\s\i\g\j\d\z\y\b\a\y\w\z\v\q\s\w\m\i\h\9\s\p\r\v\s\0\m\5\4\g\k\c\9\x\u\o\4\f\a\k\z\1\b\v\0\a\c\e\2\b\t\f\1\9\5\p\a\h\g\8\s\o\8\y\0\j\l\0\i\r\t\u\4\r\r\t\b\4\7\i\7\c\m\p\j\0\x\o\y\f\g\l\x\4\x\8\p\x\k\z\4\6\s\g\u\8\b\n\2\5\y\i\5\d\u\7\n\5\6\n\k\m\e\7\d\u\k\t\4\z\o\9\3\h\f\w\2\f\n\4\i\1\v\4\w\v\0\c\v\p\x\a\i\y\o\j\w\z\j\p\n\u\s\o\r\j\b\7\0\z\g\2\9\g\j\7\b\m\g\q\6\t\n\m\9\r\p\2\h\v\h\o\p\0\v\m\t\u\b\n\t\g\h\6\w\z\x\0\u\7\9\u\7\e\u\8\7\x\5\s\3\g\m\a\p\g\a\8\r\u\3\y\h\n\a\5\2\x\b\p\j\h\f\r\w\x\2\p\o\v\v\0\b\v\e\8\0\u\m\0\o\t\e\a\9\f\6\j\8\c\9\e\o\h\s\6\h\6\o\v\c\6\3\4\5\b\j\r\g\o\5\b\6\6\p\w\r\c\v\s\o\d\c\5\o\a\u\o\3\m\8\l\a\p\8\8\j\b\r\o\6\m\1\l\y\r\6\t\k\j\y\f\1\g\6\l\n\3\u\i\i\5\4\l\z\8\x\i\1\p\v\w\t\w\p\o\c\c\c\v\0\a\q\y\7\o\k\b\a\t\h\h\d\4\2\c\i\h\f\y\p\h\l\f\q\i\x\w\q\d\k\l\l\4\k\m\o\9\5\w\f\t\x\y\x\e\t\o\r\6\m\q\j\l\x\o\9\b\v\k\m\3\r\9\f\r\v\7\w\e\3\d\e\u\8\u\e\k\4\n\p\b\l\g\2\y\p\8\t\5\l\z\z\0\3\c\5\w\9\f\a\g\v\9\6\5\r\y\3\s\6\a\r\i\a\p\1\7\1\u\5\r\2\b\4\n\4\l\m\h\p\w\z\m\a\t\j\8\e\2\9\t\x\c\k\7\4\l\z\q\9\q\a\j\p\8\e\r\0\j\1\3\0\p\2\8\k\k\u\5\u\q\w\y\p\a\0\3\h\3\0\l\w\y\m\7\d\0\7\f\5\g\m\d\y\g\6\n\g\i\5\f\8\q\l\f\p\b\w\m\e\d\i\w\j\3\z\r\m\f\n\s\x\5\w\r\s\c\7\r\e\c\e\9\b\9\t\i\e\v\a\b\4\e\r\j\u\3\i\a\6\b\g\t\2\k\h\r\8\q\z\1\m\b\y\i\f\p\8\7\2\q\e\0\f\x\7\6\e\h\s\s\g\8\w\1\e\7\j\4\i\7\q\4\n\a\a\i\6\y\l\t\c\s\x\m\4\t\y\t\q\y\j\b\w\z\0\r\5\l\x\j\c\g\t\7\f\4\e\t\a\2\6\2\g\f\g\k\j\q\c\b\l\u\1\i\i\n\q\a\v\p\b\6\1\n\x\a\w\i\q\s\p\a\z\g\x\m\d\y\r\k\6\x\z\r\o\e\o\k\w\1\q\8\a\g\p\h\z\n\v\s\d\c\v\f\w\a\0\2\a\y\s\l\j\k\1\w\p\k\7\n\w\d\8\8\f\1\q\q\3\v\o\t\b\4\m\4\2\1\i\b\5\r\s\o\s\7\r\a\f\u\f\7\v\b\o\v\1\p\3\f\r\1\o\l\p\s\t\d\f\o\r\2\z\t\a\p\5\m\s\e\z\3\r\y\w\n\z\3\v\4\j\v\2\p\q\4\k\p\q\x\p\6\m\o\7\k\n\i\8 ]] 00:09:09.408 13:43:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:09.408 13:43:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 5t1sifecwf17dish6xqg5u2hq4kex4a38e7c5ww3s0xqsb8148bz22slzufmjtkblw7cxhbczwe6bum67ys1l1v3y4uhye06lysqjk1e5l5phd6fxlqw0el0nfm9k920ccs06bve31c933j08k93wbx2jxxtpcefl4rsigjdzybaywzvqswmih9sprvs0m54gkc9xuo4fakz1bv0ace2btf195pahg8so8y0jl0irtu4rrtb47i7cmpj0xoyfglx4x8pxkz46sgu8bn25yi5du7n56nkme7dukt4zo93hfw2fn4i1v4wv0cvpxaiyojwzjpnusorjb70zg29gj7bmgq6tnm9rp2hvhop0vmtubntgh6wzx0u79u7eu87x5s3gmapga8ru3yhna52xbpjhfrwx2povv0bve80um0otea9f6j8c9eohs6h6ovc6345bjrgo5b66pwrcvsodc5oauo3m8lap88jbro6m1lyr6tkjyf1g6ln3uii54lz8xi1pvwtwpocccv0aqy7okbathhd42cihfyphlfqixwqdkll4kmo95wftxyxetor6mqjlxo9bvkm3r9frv7we3deu8uek4npblg2yp8t5lzz03c5w9fagv965ry3s6ariap171u5r2b4n4lmhpwzmatj8e29txck74lzq9qajp8er0j130p28kku5uqwypa03h30lwym7d07f5gmdyg6ngi5f8qlfpbwmediwj3zrmfnsx5wrsc7rece9b9tievab4erju3ia6bgt2khr8qz1mbyifp872qe0fx76ehssg8w1e7j4i7q4naai6yltcsxm4tytqyjbwz0r5lxjcgt7f4eta262gfgkjqcblu1iinqavpb61nxawiqspazgxmdyrk6xzroeokw1q8agphznvsdcvfwa02aysljk1wpk7nwd88f1qq3votb4m421ib5rsos7rafuf7vbov1p3fr1olpstdfor2ztap5msez3rywnz3v4jv2pq4kpqxp6mo7kni8 == 
\5\t\1\s\i\f\e\c\w\f\1\7\d\i\s\h\6\x\q\g\5\u\2\h\q\4\k\e\x\4\a\3\8\e\7\c\5\w\w\3\s\0\x\q\s\b\8\1\4\8\b\z\2\2\s\l\z\u\f\m\j\t\k\b\l\w\7\c\x\h\b\c\z\w\e\6\b\u\m\6\7\y\s\1\l\1\v\3\y\4\u\h\y\e\0\6\l\y\s\q\j\k\1\e\5\l\5\p\h\d\6\f\x\l\q\w\0\e\l\0\n\f\m\9\k\9\2\0\c\c\s\0\6\b\v\e\3\1\c\9\3\3\j\0\8\k\9\3\w\b\x\2\j\x\x\t\p\c\e\f\l\4\r\s\i\g\j\d\z\y\b\a\y\w\z\v\q\s\w\m\i\h\9\s\p\r\v\s\0\m\5\4\g\k\c\9\x\u\o\4\f\a\k\z\1\b\v\0\a\c\e\2\b\t\f\1\9\5\p\a\h\g\8\s\o\8\y\0\j\l\0\i\r\t\u\4\r\r\t\b\4\7\i\7\c\m\p\j\0\x\o\y\f\g\l\x\4\x\8\p\x\k\z\4\6\s\g\u\8\b\n\2\5\y\i\5\d\u\7\n\5\6\n\k\m\e\7\d\u\k\t\4\z\o\9\3\h\f\w\2\f\n\4\i\1\v\4\w\v\0\c\v\p\x\a\i\y\o\j\w\z\j\p\n\u\s\o\r\j\b\7\0\z\g\2\9\g\j\7\b\m\g\q\6\t\n\m\9\r\p\2\h\v\h\o\p\0\v\m\t\u\b\n\t\g\h\6\w\z\x\0\u\7\9\u\7\e\u\8\7\x\5\s\3\g\m\a\p\g\a\8\r\u\3\y\h\n\a\5\2\x\b\p\j\h\f\r\w\x\2\p\o\v\v\0\b\v\e\8\0\u\m\0\o\t\e\a\9\f\6\j\8\c\9\e\o\h\s\6\h\6\o\v\c\6\3\4\5\b\j\r\g\o\5\b\6\6\p\w\r\c\v\s\o\d\c\5\o\a\u\o\3\m\8\l\a\p\8\8\j\b\r\o\6\m\1\l\y\r\6\t\k\j\y\f\1\g\6\l\n\3\u\i\i\5\4\l\z\8\x\i\1\p\v\w\t\w\p\o\c\c\c\v\0\a\q\y\7\o\k\b\a\t\h\h\d\4\2\c\i\h\f\y\p\h\l\f\q\i\x\w\q\d\k\l\l\4\k\m\o\9\5\w\f\t\x\y\x\e\t\o\r\6\m\q\j\l\x\o\9\b\v\k\m\3\r\9\f\r\v\7\w\e\3\d\e\u\8\u\e\k\4\n\p\b\l\g\2\y\p\8\t\5\l\z\z\0\3\c\5\w\9\f\a\g\v\9\6\5\r\y\3\s\6\a\r\i\a\p\1\7\1\u\5\r\2\b\4\n\4\l\m\h\p\w\z\m\a\t\j\8\e\2\9\t\x\c\k\7\4\l\z\q\9\q\a\j\p\8\e\r\0\j\1\3\0\p\2\8\k\k\u\5\u\q\w\y\p\a\0\3\h\3\0\l\w\y\m\7\d\0\7\f\5\g\m\d\y\g\6\n\g\i\5\f\8\q\l\f\p\b\w\m\e\d\i\w\j\3\z\r\m\f\n\s\x\5\w\r\s\c\7\r\e\c\e\9\b\9\t\i\e\v\a\b\4\e\r\j\u\3\i\a\6\b\g\t\2\k\h\r\8\q\z\1\m\b\y\i\f\p\8\7\2\q\e\0\f\x\7\6\e\h\s\s\g\8\w\1\e\7\j\4\i\7\q\4\n\a\a\i\6\y\l\t\c\s\x\m\4\t\y\t\q\y\j\b\w\z\0\r\5\l\x\j\c\g\t\7\f\4\e\t\a\2\6\2\g\f\g\k\j\q\c\b\l\u\1\i\i\n\q\a\v\p\b\6\1\n\x\a\w\i\q\s\p\a\z\g\x\m\d\y\r\k\6\x\z\r\o\e\o\k\w\1\q\8\a\g\p\h\z\n\v\s\d\c\v\f\w\a\0\2\a\y\s\l\j\k\1\w\p\k\7\n\w\d\8\8\f\1\q\q\3\v\o\t\b\4\m\4\2\1\i\b\5\r\s\o\s\7\r\a\f\u\f\7\v\b\o\v\1\p\3\f\r\1\o\l\p\s\t\d\f\o\r\2\z\t\a\p\5\m\s\e\z\3\r\y\w\n\z\3\v\4\j\v\2\p\q\4\k\p\q\x\p\6\m\o\7\k\n\i\8 ]] 00:09:09.408 13:43:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:10.010 13:43:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:10.010 13:43:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:10.010 13:43:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:10.010 13:43:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:10.010 [2024-10-01 13:43:19.947735] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
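The two read -rn1024 checks above confirm that the first kilobyte coming back out of uring0 still matches the generated magic, and diff -q then compares the full 512 MiB dump files. A sketch of that verification, assuming the readback is taken from the dump file (the harness's exact redirections are not visible here):

read -rn1024 verify_magic < /tmp/magic.dump1
[[ $verify_magic == "$magic" ]] || echo "uring0 readback lost the magic prefix" >&2
diff -q /tmp/magic.dump0 /tmp/magic.dump1 && echo "dump files are identical"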
00:09:10.010 [2024-10-01 13:43:19.947989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61639 ] 00:09:10.010 { 00:09:10.010 "subsystems": [ 00:09:10.010 { 00:09:10.010 "subsystem": "bdev", 00:09:10.010 "config": [ 00:09:10.010 { 00:09:10.010 "params": { 00:09:10.010 "block_size": 512, 00:09:10.010 "num_blocks": 1048576, 00:09:10.010 "name": "malloc0" 00:09:10.010 }, 00:09:10.010 "method": "bdev_malloc_create" 00:09:10.010 }, 00:09:10.010 { 00:09:10.010 "params": { 00:09:10.010 "filename": "/dev/zram1", 00:09:10.010 "name": "uring0" 00:09:10.010 }, 00:09:10.010 "method": "bdev_uring_create" 00:09:10.010 }, 00:09:10.010 { 00:09:10.010 "method": "bdev_wait_for_examine" 00:09:10.010 } 00:09:10.010 ] 00:09:10.010 } 00:09:10.010 ] 00:09:10.010 } 00:09:10.010 [2024-10-01 13:43:20.083133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.268 [2024-10-01 13:43:20.200509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.268 [2024-10-01 13:43:20.257016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.645  Copying: 142/512 [MB] (142 MBps) Copying: 283/512 [MB] (141 MBps) Copying: 428/512 [MB] (144 MBps) Copying: 512/512 [MB] (average 143 MBps) 00:09:14.645 00:09:14.645 13:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:14.645 13:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:14.645 13:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:14.645 13:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:14.645 13:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:14.645 13:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:14.645 13:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:14.645 13:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:14.645 [2024-10-01 13:43:24.718702] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
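The run launched just above feeds spdk_dd a config that creates malloc0 and uring0 and then deletes uring0 again (the full config is dumped below); the test then expects any copy that still names uring0 to fail with "No such device". A hedged sketch of that negative check; the config and output paths are illustrative:

cat > /tmp/uring_delete.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
    "method": "bdev_malloc_create" },
  { "params": { "filename": "/dev/zram1", "name": "uring0" },
    "method": "bdev_uring_create" },
  { "params": { "name": "uring0" }, "method": "bdev_uring_delete" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
if spdk_dd --ib=uring0 --of=/tmp/out.bin --json /tmp/uring_delete.json; then
  echo "unexpected: copy from the deleted uring0 bdev succeeded" >&2
fi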
00:09:14.645 [2024-10-01 13:43:24.718817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61701 ] 00:09:14.645 { 00:09:14.645 "subsystems": [ 00:09:14.645 { 00:09:14.645 "subsystem": "bdev", 00:09:14.645 "config": [ 00:09:14.645 { 00:09:14.645 "params": { 00:09:14.645 "block_size": 512, 00:09:14.645 "num_blocks": 1048576, 00:09:14.645 "name": "malloc0" 00:09:14.645 }, 00:09:14.645 "method": "bdev_malloc_create" 00:09:14.645 }, 00:09:14.645 { 00:09:14.645 "params": { 00:09:14.645 "filename": "/dev/zram1", 00:09:14.645 "name": "uring0" 00:09:14.645 }, 00:09:14.645 "method": "bdev_uring_create" 00:09:14.645 }, 00:09:14.645 { 00:09:14.645 "params": { 00:09:14.645 "name": "uring0" 00:09:14.645 }, 00:09:14.645 "method": "bdev_uring_delete" 00:09:14.645 }, 00:09:14.645 { 00:09:14.645 "method": "bdev_wait_for_examine" 00:09:14.645 } 00:09:14.645 ] 00:09:14.645 } 00:09:14.645 ] 00:09:14.645 } 00:09:14.904 [2024-10-01 13:43:24.856713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.904 [2024-10-01 13:43:25.005002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.904 [2024-10-01 13:43:25.062647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.727  Copying: 0/0 [B] (average 0 Bps) 00:09:15.727 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:15.727 13:43:25 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:15.727 [2024-10-01 13:43:25.796996] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:15.727 [2024-10-01 13:43:25.797147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61731 ] 00:09:15.727 { 00:09:15.727 "subsystems": [ 00:09:15.727 { 00:09:15.727 "subsystem": "bdev", 00:09:15.727 "config": [ 00:09:15.727 { 00:09:15.727 "params": { 00:09:15.727 "block_size": 512, 00:09:15.727 "num_blocks": 1048576, 00:09:15.727 "name": "malloc0" 00:09:15.727 }, 00:09:15.727 "method": "bdev_malloc_create" 00:09:15.727 }, 00:09:15.727 { 00:09:15.727 "params": { 00:09:15.727 "filename": "/dev/zram1", 00:09:15.727 "name": "uring0" 00:09:15.727 }, 00:09:15.727 "method": "bdev_uring_create" 00:09:15.727 }, 00:09:15.727 { 00:09:15.727 "params": { 00:09:15.727 "name": "uring0" 00:09:15.727 }, 00:09:15.727 "method": "bdev_uring_delete" 00:09:15.727 }, 00:09:15.727 { 00:09:15.727 "method": "bdev_wait_for_examine" 00:09:15.727 } 00:09:15.727 ] 00:09:15.727 } 00:09:15.727 ] 00:09:15.727 } 00:09:15.986 [2024-10-01 13:43:25.934516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.986 [2024-10-01 13:43:26.057480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.986 [2024-10-01 13:43:26.116412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:16.243 [2024-10-01 13:43:26.338228] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:16.243 [2024-10-01 13:43:26.338295] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:16.243 [2024-10-01 13:43:26.338309] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:09:16.244 [2024-10-01 13:43:26.338321] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:16.502 [2024-10-01 13:43:26.655585] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:16.760 13:43:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:09:16.760 13:43:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:16.760 13:43:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:09:16.760 13:43:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:09:16.760 13:43:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:09:16.760 13:43:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:16.760 13:43:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:16.760 13:43:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:09:16.760 13:43:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:09:16.760 13:43:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:09:16.760 13:43:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:09:16.760 13:43:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:17.017 ************************************ 00:09:17.017 END TEST dd_uring_copy 00:09:17.017 ************************************ 00:09:17.017 00:09:17.017 real 0m17.447s 00:09:17.017 user 0m11.863s 00:09:17.017 sys 0m14.355s 00:09:17.017 13:43:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.017 13:43:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:17.017 00:09:17.017 real 0m17.703s 00:09:17.017 user 0m12.007s 00:09:17.017 sys 0m14.468s 00:09:17.017 13:43:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.017 13:43:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:17.017 ************************************ 00:09:17.017 END TEST spdk_dd_uring 00:09:17.017 ************************************ 00:09:17.017 13:43:27 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:17.017 13:43:27 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:17.017 13:43:27 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.017 13:43:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:17.017 ************************************ 00:09:17.017 START TEST spdk_dd_sparse 00:09:17.017 ************************************ 00:09:17.017 13:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:17.275 * Looking for test storage... 00:09:17.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:17.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.275 --rc genhtml_branch_coverage=1 00:09:17.275 --rc genhtml_function_coverage=1 00:09:17.275 --rc genhtml_legend=1 00:09:17.275 --rc geninfo_all_blocks=1 00:09:17.275 --rc geninfo_unexecuted_blocks=1 00:09:17.275 00:09:17.275 ' 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:17.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.275 --rc genhtml_branch_coverage=1 00:09:17.275 --rc genhtml_function_coverage=1 00:09:17.275 --rc genhtml_legend=1 00:09:17.275 --rc geninfo_all_blocks=1 00:09:17.275 --rc geninfo_unexecuted_blocks=1 00:09:17.275 00:09:17.275 ' 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:17.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.275 --rc genhtml_branch_coverage=1 00:09:17.275 --rc genhtml_function_coverage=1 00:09:17.275 --rc genhtml_legend=1 00:09:17.275 --rc geninfo_all_blocks=1 00:09:17.275 --rc geninfo_unexecuted_blocks=1 00:09:17.275 00:09:17.275 ' 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:17.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.275 --rc genhtml_branch_coverage=1 00:09:17.275 --rc genhtml_function_coverage=1 00:09:17.275 --rc genhtml_legend=1 00:09:17.275 --rc geninfo_all_blocks=1 00:09:17.275 --rc geninfo_unexecuted_blocks=1 00:09:17.275 00:09:17.275 ' 00:09:17.275 13:43:27 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.276 13:43:27 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:17.276 1+0 records in 00:09:17.276 1+0 records out 00:09:17.276 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00779943 s, 538 MB/s 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:17.276 1+0 records in 00:09:17.276 1+0 records out 00:09:17.276 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00628081 s, 668 MB/s 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:17.276 1+0 records in 00:09:17.276 1+0 records out 00:09:17.276 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00824251 s, 509 MB/s 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:17.276 ************************************ 00:09:17.276 START TEST dd_sparse_file_to_file 00:09:17.276 ************************************ 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:17.276 13:43:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:17.276 [2024-10-01 13:43:27.426959] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
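The sparse source file above is built by seeking past holes rather than writing them: three 4 MiB extents of data land at offsets 0, 16 MiB and 32 MiB of file_zero1 (dd's seek= counts in 4M blocks), while dd_sparse_aio_disk is a plain 100 MB file that will later back the aio bdev. The result is a file with a 36 MiB apparent size but only 12 MiB of allocated data, which is exactly what the later stat checks assert. A minimal reproduction of that preparation, using the same file names in a scratch directory:

  truncate dd_sparse_aio_disk --size 104857600        # 100 MB backing file for the aio bdev
  dd if=/dev/zero of=file_zero1 bs=4M count=1          # data at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # data at 16 MiB, hole before it
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # data at 32 MiB, hole before it
  stat --printf='%s %b\n' file_zero1                   # 37748736 24576 in this run (36 MiB apparent, 12 MiB allocated)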
00:09:17.276 [2024-10-01 13:43:27.427075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61835 ] 00:09:17.276 { 00:09:17.276 "subsystems": [ 00:09:17.276 { 00:09:17.276 "subsystem": "bdev", 00:09:17.276 "config": [ 00:09:17.276 { 00:09:17.276 "params": { 00:09:17.276 "block_size": 4096, 00:09:17.276 "filename": "dd_sparse_aio_disk", 00:09:17.276 "name": "dd_aio" 00:09:17.276 }, 00:09:17.276 "method": "bdev_aio_create" 00:09:17.276 }, 00:09:17.276 { 00:09:17.276 "params": { 00:09:17.276 "lvs_name": "dd_lvstore", 00:09:17.276 "bdev_name": "dd_aio" 00:09:17.276 }, 00:09:17.276 "method": "bdev_lvol_create_lvstore" 00:09:17.276 }, 00:09:17.276 { 00:09:17.276 "method": "bdev_wait_for_examine" 00:09:17.276 } 00:09:17.276 ] 00:09:17.276 } 00:09:17.276 ] 00:09:17.276 } 00:09:17.533 [2024-10-01 13:43:27.568307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.533 [2024-10-01 13:43:27.684221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.805 [2024-10-01 13:43:27.740863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.074  Copying: 12/36 [MB] (average 923 MBps) 00:09:18.074 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:18.074 00:09:18.074 real 0m0.747s 00:09:18.074 user 0m0.465s 00:09:18.074 sys 0m0.374s 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.074 ************************************ 00:09:18.074 END TEST dd_sparse_file_to_file 00:09:18.074 ************************************ 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:18.074 13:43:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.075 13:43:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:18.075 ************************************ 00:09:18.075 START TEST dd_sparse_file_to_bdev 00:09:18.075 
************************************ 00:09:18.075 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:09:18.075 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:18.075 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:18.075 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:09:18.075 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:18.075 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:18.075 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:09:18.075 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:18.075 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:18.075 [2024-10-01 13:43:28.227032] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:18.075 [2024-10-01 13:43:28.227130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61883 ] 00:09:18.075 { 00:09:18.075 "subsystems": [ 00:09:18.075 { 00:09:18.075 "subsystem": "bdev", 00:09:18.075 "config": [ 00:09:18.075 { 00:09:18.075 "params": { 00:09:18.075 "block_size": 4096, 00:09:18.075 "filename": "dd_sparse_aio_disk", 00:09:18.075 "name": "dd_aio" 00:09:18.075 }, 00:09:18.075 "method": "bdev_aio_create" 00:09:18.075 }, 00:09:18.075 { 00:09:18.075 "params": { 00:09:18.075 "lvs_name": "dd_lvstore", 00:09:18.075 "lvol_name": "dd_lvol", 00:09:18.075 "size_in_mib": 36, 00:09:18.075 "thin_provision": true 00:09:18.075 }, 00:09:18.075 "method": "bdev_lvol_create" 00:09:18.075 }, 00:09:18.075 { 00:09:18.075 "method": "bdev_wait_for_examine" 00:09:18.075 } 00:09:18.075 ] 00:09:18.075 } 00:09:18.075 ] 00:09:18.075 } 00:09:18.333 [2024-10-01 13:43:28.360433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.333 [2024-10-01 13:43:28.467522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.590 [2024-10-01 13:43:28.524746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.848  Copying: 12/36 [MB] (average 428 MBps) 00:09:18.848 00:09:18.848 00:09:18.848 real 0m0.699s 00:09:18.848 user 0m0.457s 00:09:18.848 sys 0m0.354s 00:09:18.848 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.848 ************************************ 00:09:18.848 END TEST dd_sparse_file_to_bdev 00:09:18.848 ************************************ 00:09:18.848 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:18.848 13:43:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:18.848 13:43:28 
spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:18.848 13:43:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.848 13:43:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:18.848 ************************************ 00:09:18.848 START TEST dd_sparse_bdev_to_file 00:09:18.848 ************************************ 00:09:18.848 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:09:18.848 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:09:18.848 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:18.848 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:18.848 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:18.848 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:18.848 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:09:18.848 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:18.848 13:43:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:18.848 { 00:09:18.848 "subsystems": [ 00:09:18.848 { 00:09:18.848 "subsystem": "bdev", 00:09:18.848 "config": [ 00:09:18.848 { 00:09:18.848 "params": { 00:09:18.848 "block_size": 4096, 00:09:18.848 "filename": "dd_sparse_aio_disk", 00:09:18.848 "name": "dd_aio" 00:09:18.848 }, 00:09:18.848 "method": "bdev_aio_create" 00:09:18.848 }, 00:09:18.848 { 00:09:18.848 "method": "bdev_wait_for_examine" 00:09:18.848 } 00:09:18.848 ] 00:09:18.848 } 00:09:18.848 ] 00:09:18.848 } 00:09:18.848 [2024-10-01 13:43:28.980176] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
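Taken together, the three sub-tests form a round trip for the same sparse payload: file_zero1 is copied to file_zero2 (file to file), file_zero2 into the thin-provisioned lvol dd_lvstore/dd_lvol created on the aio bdev (file to bdev), and the lvol back out to file_zero3 (bdev to file). Every hop uses --bs=12582912 -- 12 MiB, exactly the amount of allocated data -- and --sparse, which per the spdk_dd usage text later in this log enables hole skipping in the input target. The three invocations as issued by the harness (built spdk_dd binary, bdev config generated by gen_conf and passed on fd 62):

  spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62
  spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62
  spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62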
00:09:18.848 [2024-10-01 13:43:28.980276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61910 ] 00:09:19.135 [2024-10-01 13:43:29.121267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.135 [2024-10-01 13:43:29.223116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.135 [2024-10-01 13:43:29.276224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:19.650  Copying: 12/36 [MB] (average 857 MBps) 00:09:19.650 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:19.650 00:09:19.650 real 0m0.712s 00:09:19.650 user 0m0.453s 00:09:19.650 sys 0m0.357s 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.650 ************************************ 00:09:19.650 END TEST dd_sparse_bdev_to_file 00:09:19.650 ************************************ 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:09:19.650 ************************************ 00:09:19.650 END TEST spdk_dd_sparse 00:09:19.650 ************************************ 00:09:19.650 00:09:19.650 real 0m2.546s 00:09:19.650 user 0m1.544s 00:09:19.650 sys 0m1.302s 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.650 13:43:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:19.650 13:43:29 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:19.650 13:43:29 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:19.650 13:43:29 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.650 13:43:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 
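The pass criterion for each sparse hop is the pair of stat comparisons above: %s (apparent size) and %b (allocated 512-byte blocks) must match between source and destination -- 37748736 bytes apparent but only 24576 blocks (12 MiB) allocated -- which shows the holes survived the copy through the lvol and back. Outside the harness the same check reduces to a sketch along these lines (file names as in the test):

  [ "$(stat --printf=%s file_zero2)" = "$(stat --printf=%s file_zero3)" ]   # same apparent size
  [ "$(stat --printf=%b file_zero2)" = "$(stat --printf=%b file_zero3)" ]   # same allocated blocks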
00:09:19.650 ************************************ 00:09:19.650 START TEST spdk_dd_negative 00:09:19.650 ************************************ 00:09:19.650 13:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:19.909 * Looking for test storage... 00:09:19.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:19.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.909 --rc genhtml_branch_coverage=1 00:09:19.909 --rc genhtml_function_coverage=1 00:09:19.909 --rc genhtml_legend=1 00:09:19.909 --rc geninfo_all_blocks=1 00:09:19.909 --rc geninfo_unexecuted_blocks=1 00:09:19.909 00:09:19.909 ' 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:19.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.909 --rc genhtml_branch_coverage=1 00:09:19.909 --rc genhtml_function_coverage=1 00:09:19.909 --rc genhtml_legend=1 00:09:19.909 --rc geninfo_all_blocks=1 00:09:19.909 --rc geninfo_unexecuted_blocks=1 00:09:19.909 00:09:19.909 ' 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:19.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.909 --rc genhtml_branch_coverage=1 00:09:19.909 --rc genhtml_function_coverage=1 00:09:19.909 --rc genhtml_legend=1 00:09:19.909 --rc geninfo_all_blocks=1 00:09:19.909 --rc geninfo_unexecuted_blocks=1 00:09:19.909 00:09:19.909 ' 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:19.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.909 --rc genhtml_branch_coverage=1 00:09:19.909 --rc genhtml_function_coverage=1 00:09:19.909 --rc genhtml_legend=1 00:09:19.909 --rc geninfo_all_blocks=1 00:09:19.909 --rc geninfo_unexecuted_blocks=1 00:09:19.909 00:09:19.909 ' 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.909 13:43:29 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:19.910 ************************************ 00:09:19.910 START TEST 
dd_invalid_arguments 00:09:19.910 ************************************ 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:19.910 13:43:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:19.910 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:19.910 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:19.910 00:09:19.910 CPU options: 00:09:19.910 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:19.910 (like [0,1,10]) 00:09:19.910 --lcores lcore to CPU mapping list. The list is in the format: 00:09:19.910 [<,lcores[@CPUs]>...] 00:09:19.910 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:19.910 Within the group, '-' is used for range separator, 00:09:19.910 ',' is used for single number separator. 00:09:19.910 '( )' can be omitted for single element group, 00:09:19.910 '@' can be omitted if cpus and lcores have the same value 00:09:19.910 --disable-cpumask-locks Disable CPU core lock files. 00:09:19.910 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:19.910 pollers in the app support interrupt mode) 00:09:19.910 -p, --main-core main (primary) core for DPDK 00:09:19.910 00:09:19.910 Configuration options: 00:09:19.910 -c, --config, --json JSON config file 00:09:19.910 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:19.910 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:19.910 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:19.910 --rpcs-allowed comma-separated list of permitted RPCS 00:09:19.910 --json-ignore-init-errors don't exit on invalid config entry 00:09:19.910 00:09:19.910 Memory options: 00:09:19.910 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:19.910 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:19.910 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:19.910 -R, --huge-unlink unlink huge files after initialization 00:09:19.910 -n, --mem-channels number of memory channels used for DPDK 00:09:19.910 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:19.910 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:19.910 --no-huge run without using hugepages 00:09:19.910 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:19.910 -i, --shm-id shared memory ID (optional) 00:09:19.910 -g, --single-file-segments force creating just one hugetlbfs file 00:09:19.910 00:09:19.910 PCI options: 00:09:19.910 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:19.910 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:19.910 -u, --no-pci disable PCI access 00:09:19.910 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:19.910 00:09:19.910 Log options: 00:09:19.910 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:19.910 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:19.910 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:19.910 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:19.910 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:09:19.910 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:09:19.910 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:09:19.910 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:09:19.910 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:09:19.910 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:09:19.910 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:19.910 --silence-noticelog disable notice level logging to stderr 00:09:19.910 00:09:19.910 Trace options: 00:09:19.910 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:19.910 setting 0 to disable trace (default 32768) 00:09:19.910 Tracepoints vary in size and can use more than one trace entry. 00:09:19.910 -e, --tpoint-group [:] 00:09:19.910 [2024-10-01 13:43:30.018622] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:09:19.910 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:09:19.910 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:09:19.910 bdev_raid, all). 00:09:19.910 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:19.910 a tracepoint group. First tpoint inside a group can be enabled by 00:09:19.910 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:19.910 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:09:19.910 in /include/spdk_internal/trace_defs.h 00:09:19.910 00:09:19.910 Other options: 00:09:19.910 -h, --help show this usage 00:09:19.910 -v, --version print SPDK version 00:09:19.910 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:19.910 --env-context Opaque context for use of the env implementation 00:09:19.910 00:09:19.910 Application specific: 00:09:19.910 [--------- DD Options ---------] 00:09:19.910 --if Input file. Must specify either --if or --ib. 00:09:19.910 --ib Input bdev. Must specifier either --if or --ib 00:09:19.910 --of Output file. Must specify either --of or --ob. 00:09:19.910 --ob Output bdev. Must specify either --of or --ob. 00:09:19.910 --iflag Input file flags. 00:09:19.910 --oflag Output file flags. 00:09:19.910 --bs I/O unit size (default: 4096) 00:09:19.910 --qd Queue depth (default: 2) 00:09:19.910 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:19.910 --skip Skip this many I/O units at start of input. (default: 0) 00:09:19.910 --seek Skip this many I/O units at start of output. (default: 0) 00:09:19.910 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:19.910 --sparse Enable hole skipping in input target 00:09:19.910 Available iflag and oflag values: 00:09:19.911 append - append mode 00:09:19.911 direct - use direct I/O for data 00:09:19.911 directory - fail unless a directory 00:09:19.911 dsync - use synchronized I/O for data 00:09:19.911 noatime - do not update access time 00:09:19.911 noctty - do not assign controlling terminal from file 00:09:19.911 nofollow - do not follow symlinks 00:09:19.911 nonblock - use non-blocking I/O 00:09:19.911 sync - use synchronized I/O for data and metadata 00:09:19.911 13:43:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:09:19.911 ************************************ 00:09:19.911 END TEST dd_invalid_arguments 00:09:19.911 ************************************ 00:09:19.911 13:43:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:19.911 13:43:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:19.911 13:43:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:19.911 00:09:19.911 real 0m0.078s 00:09:19.911 user 0m0.050s 00:09:19.911 sys 0m0.025s 00:09:19.911 13:43:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.911 13:43:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:09:19.911 13:43:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:09:19.911 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:19.911 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.911 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:20.169 ************************************ 00:09:20.169 START TEST dd_double_input 00:09:20.169 ************************************ 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:20.169 [2024-10-01 13:43:30.137937] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
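Each case in the negative suite follows the same shape: the harness's NOT wrapper from common/autotest_common.sh runs spdk_dd with a deliberately invalid argument set and counts the test as passed only when spdk_dd exits non-zero after printing the expected *ERROR* line -- here --if and --ib supplied together. Stripped of the wrapper, the check is roughly the following (a sketch, not the harness code itself):

  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
       --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=; then
    echo "spdk_dd unexpectedly accepted both --if and --ib" >&2
    exit 1
  fi
  # stderr is expected to carry: "You may specify either --if or --ib, but not both."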
00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:20.169 00:09:20.169 real 0m0.065s 00:09:20.169 user 0m0.036s 00:09:20.169 sys 0m0.028s 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:09:20.169 ************************************ 00:09:20.169 END TEST dd_double_input 00:09:20.169 ************************************ 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:20.169 ************************************ 00:09:20.169 START TEST dd_double_output 00:09:20.169 ************************************ 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:20.169 [2024-10-01 13:43:30.262903] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:20.169 00:09:20.169 real 0m0.077s 00:09:20.169 user 0m0.046s 00:09:20.169 sys 0m0.031s 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:09:20.169 ************************************ 00:09:20.169 END TEST dd_double_output 00:09:20.169 ************************************ 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:20.169 ************************************ 00:09:20.169 START TEST dd_no_input 00:09:20.169 ************************************ 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:09:20.169 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:20.170 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:09:20.170 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:20.170 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.170 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.170 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.170 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.170 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.170 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.170 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.170 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:20.170 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:20.428 [2024-10-01 13:43:30.394420] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:20.428 00:09:20.428 real 0m0.077s 00:09:20.428 user 0m0.049s 00:09:20.428 sys 0m0.027s 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:09:20.428 ************************************ 00:09:20.428 END TEST dd_no_input 00:09:20.428 ************************************ 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:20.428 ************************************ 00:09:20.428 START TEST dd_no_output 00:09:20.428 ************************************ 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:20.428 [2024-10-01 13:43:30.525971] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:09:20.428 13:43:30 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:20.428 00:09:20.428 real 0m0.078s 00:09:20.428 user 0m0.051s 00:09:20.428 sys 0m0.026s 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:20.428 ************************************ 00:09:20.428 END TEST dd_no_output 00:09:20.428 ************************************ 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:20.428 ************************************ 00:09:20.428 START TEST dd_wrong_blocksize 00:09:20.428 ************************************ 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:20.428 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:20.686 [2024-10-01 13:43:30.653078] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:09:20.686 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:09:20.686 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:20.686 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:20.686 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:20.686 00:09:20.686 real 0m0.077s 00:09:20.686 user 0m0.050s 00:09:20.686 sys 0m0.027s 00:09:20.686 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.686 13:43:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:20.686 ************************************ 00:09:20.687 END TEST dd_wrong_blocksize 00:09:20.687 ************************************ 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:20.687 ************************************ 00:09:20.687 START TEST dd_smaller_blocksize 00:09:20.687 ************************************ 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.687 
13:43:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:20.687 13:43:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:20.687 [2024-10-01 13:43:30.780301] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:20.687 [2024-10-01 13:43:30.780416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62142 ] 00:09:20.944 [2024-10-01 13:43:30.914063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.944 [2024-10-01 13:43:31.031265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.944 [2024-10-01 13:43:31.085538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:21.510 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:21.510 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:21.768 [2024-10-01 13:43:31.694070] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:21.768 [2024-10-01 13:43:31.694154] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:21.768 [2024-10-01 13:43:31.817656] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:21.768 13:43:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:09:21.768 13:43:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:21.768 13:43:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:09:21.768 13:43:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:09:21.768 13:43:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:09:21.768 13:43:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:21.768 00:09:21.768 real 0m1.200s 00:09:21.768 user 0m0.472s 00:09:21.768 sys 0m0.619s 00:09:21.768 13:43:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.768 13:43:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:21.768 ************************************ 00:09:21.768 END TEST dd_smaller_blocksize 00:09:21.768 ************************************ 00:09:22.027 13:43:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:22.028 ************************************ 00:09:22.028 START TEST dd_invalid_count 00:09:22.028 ************************************ 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
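dd_smaller_blocksize is the one negative case that gets past argument parsing: with --bs=99999999999999 spdk_dd starts its app framework, fails to find hugepage memory for a buffer that large (the two eal_memalloc_alloc_seg_bulk errors), and aborts with "Cannot allocate memory - try smaller block size value". Judging from the es= trace, the wrapper then folds the raw exit status into a pass: 244 is above 128, so it is reduced by 128 to 116 and finally normalized to 1, i.e. "failed as expected". The failure can be reproduced directly with the same invocation the harness used:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
      --bs=99999999999999
  # expected: the "try smaller block size value" error and a non-zero exit status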
00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.028 13:43:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:22.028 [2024-10-01 13:43:32.057581] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.028 00:09:22.028 real 0m0.105s 00:09:22.028 user 0m0.065s 00:09:22.028 sys 0m0.037s 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:22.028 ************************************ 00:09:22.028 END TEST dd_invalid_count 00:09:22.028 ************************************ 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:22.028 ************************************ 
00:09:22.028 START TEST dd_invalid_oflag 00:09:22.028 ************************************ 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.028 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:22.028 [2024-10-01 13:43:32.193254] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.286 00:09:22.286 real 0m0.077s 00:09:22.286 user 0m0.050s 00:09:22.286 sys 0m0.026s 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:09:22.286 ************************************ 00:09:22.286 END TEST dd_invalid_oflag 00:09:22.286 ************************************ 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:22.286 ************************************ 00:09:22.286 START TEST dd_invalid_iflag 00:09:22.286 
************************************ 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:22.286 [2024-10-01 13:43:32.318250] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.286 00:09:22.286 real 0m0.078s 00:09:22.286 user 0m0.053s 00:09:22.286 sys 0m0.023s 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:22.286 ************************************ 00:09:22.286 END TEST dd_invalid_iflag 00:09:22.286 ************************************ 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:22.286 ************************************ 00:09:22.286 START TEST dd_unknown_flag 00:09:22.286 ************************************ 00:09:22.286 
13:43:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.286 13:43:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:22.286 [2024-10-01 13:43:32.443202] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:22.286 [2024-10-01 13:43:32.443305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62245 ] 00:09:22.576 [2024-10-01 13:43:32.578956] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.576 [2024-10-01 13:43:32.714843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.834 [2024-10-01 13:43:32.775943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.834 [2024-10-01 13:43:32.817244] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:22.834 [2024-10-01 13:43:32.817338] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:22.834 [2024-10-01 13:43:32.817408] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:22.834 [2024-10-01 13:43:32.817426] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:22.834 [2024-10-01 13:43:32.817691] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:22.834 [2024-10-01 13:43:32.817724] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:22.834 [2024-10-01 13:43:32.817788] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:22.834 [2024-10-01 13:43:32.817801] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:22.834 [2024-10-01 13:43:32.945017] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:23.092 00:09:23.092 real 0m0.678s 00:09:23.092 user 0m0.403s 00:09:23.092 sys 0m0.177s 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:23.092 ************************************ 00:09:23.092 END TEST dd_unknown_flag 00:09:23.092 ************************************ 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:23.092 ************************************ 00:09:23.092 START TEST dd_invalid_json 00:09:23.092 ************************************ 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:23.092 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:23.092 [2024-10-01 13:43:33.174161] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:23.092 [2024-10-01 13:43:33.174323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62268 ] 00:09:23.351 [2024-10-01 13:43:33.312775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.351 [2024-10-01 13:43:33.443751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.351 [2024-10-01 13:43:33.443858] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:23.351 [2024-10-01 13:43:33.443878] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:23.351 [2024-10-01 13:43:33.443891] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:23.351 [2024-10-01 13:43:33.443957] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:23.610 00:09:23.610 real 0m0.450s 00:09:23.610 user 0m0.272s 00:09:23.610 sys 0m0.075s 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:23.610 ************************************ 00:09:23.610 END TEST dd_invalid_json 00:09:23.610 ************************************ 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:23.610 ************************************ 00:09:23.610 START TEST dd_invalid_seek 00:09:23.610 ************************************ 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:23.610 
13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:23.610 13:43:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:23.610 { 00:09:23.610 "subsystems": [ 00:09:23.610 { 00:09:23.610 "subsystem": "bdev", 00:09:23.610 "config": [ 00:09:23.610 { 00:09:23.610 "params": { 00:09:23.610 "block_size": 512, 00:09:23.610 "num_blocks": 512, 00:09:23.610 "name": "malloc0" 00:09:23.610 }, 00:09:23.610 "method": "bdev_malloc_create" 00:09:23.610 }, 00:09:23.610 { 00:09:23.610 "params": { 00:09:23.610 "block_size": 512, 00:09:23.610 "num_blocks": 512, 00:09:23.610 "name": "malloc1" 00:09:23.610 }, 00:09:23.610 "method": "bdev_malloc_create" 00:09:23.610 }, 00:09:23.610 { 00:09:23.610 "method": "bdev_wait_for_examine" 00:09:23.610 } 00:09:23.610 ] 00:09:23.610 } 00:09:23.610 ] 00:09:23.610 } 00:09:23.610 [2024-10-01 13:43:33.677824] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:23.610 [2024-10-01 13:43:33.677960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62303 ] 00:09:23.869 [2024-10-01 13:43:33.818756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.869 [2024-10-01 13:43:33.941177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.869 [2024-10-01 13:43:34.000215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:24.128 [2024-10-01 13:43:34.064761] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:09:24.128 [2024-10-01 13:43:34.064841] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:24.128 [2024-10-01 13:43:34.189007] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:24.128 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:09:24.128 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:24.128 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:09:24.128 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:09:24.128 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:09:24.128 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:24.128 ************************************ 00:09:24.128 END TEST dd_invalid_seek 00:09:24.128 ************************************ 00:09:24.128 00:09:24.128 real 0m0.680s 00:09:24.128 user 0m0.460s 00:09:24.128 sys 0m0.177s 00:09:24.128 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.128 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:24.386 13:43:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:09:24.386 13:43:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:24.386 13:43:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.386 13:43:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:24.386 ************************************ 00:09:24.386 START TEST dd_invalid_skip 00:09:24.386 ************************************ 00:09:24.386 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:09:24.386 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:24.387 13:43:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:24.387 { 00:09:24.387 "subsystems": [ 00:09:24.387 { 00:09:24.387 "subsystem": "bdev", 00:09:24.387 "config": [ 00:09:24.387 { 00:09:24.387 "params": { 00:09:24.387 "block_size": 512, 00:09:24.387 "num_blocks": 512, 00:09:24.387 "name": "malloc0" 00:09:24.387 }, 00:09:24.387 "method": "bdev_malloc_create" 00:09:24.387 }, 00:09:24.387 { 00:09:24.387 "params": { 00:09:24.387 "block_size": 512, 00:09:24.387 "num_blocks": 512, 00:09:24.387 "name": "malloc1" 00:09:24.387 }, 00:09:24.387 "method": "bdev_malloc_create" 00:09:24.387 }, 00:09:24.387 { 00:09:24.387 "method": "bdev_wait_for_examine" 00:09:24.387 } 00:09:24.387 ] 00:09:24.387 } 00:09:24.387 ] 00:09:24.387 } 00:09:24.387 [2024-10-01 13:43:34.401053] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:24.387 [2024-10-01 13:43:34.401647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62342 ] 00:09:24.387 [2024-10-01 13:43:34.539380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.647 [2024-10-01 13:43:34.665726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.647 [2024-10-01 13:43:34.724980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:24.647 [2024-10-01 13:43:34.793604] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:09:24.647 [2024-10-01 13:43:34.793671] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:24.906 [2024-10-01 13:43:34.921587] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:24.906 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:09:24.906 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:24.906 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:09:24.906 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:09:24.906 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:09:24.906 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:24.906 00:09:24.906 real 0m0.690s 00:09:24.906 user 0m0.473s 00:09:24.906 sys 0m0.167s 00:09:24.906 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.906 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:24.906 ************************************ 00:09:24.906 END TEST dd_invalid_skip 00:09:24.906 ************************************ 00:09:24.906 13:43:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:09:24.906 13:43:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:24.906 13:43:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.906 13:43:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:25.164 ************************************ 00:09:25.164 START TEST dd_invalid_input_count 00:09:25.164 ************************************ 00:09:25.164 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:09:25.164 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:25.164 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:25.164 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:09:25.164 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:25.164 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:25.164 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:25.165 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:25.165 { 00:09:25.165 "subsystems": [ 00:09:25.165 { 00:09:25.165 "subsystem": "bdev", 00:09:25.165 "config": [ 00:09:25.165 { 00:09:25.165 "params": { 00:09:25.165 "block_size": 512, 00:09:25.165 "num_blocks": 512, 00:09:25.165 "name": "malloc0" 00:09:25.165 }, 00:09:25.165 "method": "bdev_malloc_create" 00:09:25.165 }, 00:09:25.165 { 00:09:25.165 "params": { 00:09:25.165 "block_size": 512, 00:09:25.165 "num_blocks": 512, 00:09:25.165 "name": "malloc1" 00:09:25.165 }, 00:09:25.165 "method": "bdev_malloc_create" 00:09:25.165 }, 00:09:25.165 { 00:09:25.165 "method": "bdev_wait_for_examine" 00:09:25.165 } 00:09:25.165 ] 00:09:25.165 } 00:09:25.165 ] 00:09:25.165 } 00:09:25.165 [2024-10-01 13:43:35.158972] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:25.165 [2024-10-01 13:43:35.159099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62370 ] 00:09:25.165 [2024-10-01 13:43:35.297448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.425 [2024-10-01 13:43:35.471642] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.425 [2024-10-01 13:43:35.555818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:25.689 [2024-10-01 13:43:35.639179] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:09:25.689 [2024-10-01 13:43:35.639259] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:25.689 [2024-10-01 13:43:35.829950] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:25.948 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:09:25.948 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:25.948 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:09:25.948 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:09:25.948 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:09:25.948 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:25.948 00:09:25.948 real 0m0.893s 00:09:25.948 user 0m0.606s 00:09:25.948 sys 0m0.248s 00:09:25.948 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.948 13:43:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:25.948 ************************************ 00:09:25.948 END TEST dd_invalid_input_count 00:09:25.948 ************************************ 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:25.948 ************************************ 00:09:25.948 START TEST dd_invalid_output_count 00:09:25.948 ************************************ 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # invalid_output_count 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:25.948 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:25.948 { 00:09:25.948 "subsystems": [ 00:09:25.948 { 00:09:25.948 "subsystem": "bdev", 00:09:25.948 "config": [ 00:09:25.948 { 00:09:25.948 "params": { 00:09:25.948 "block_size": 512, 00:09:25.948 "num_blocks": 512, 00:09:25.948 "name": "malloc0" 00:09:25.948 }, 00:09:25.948 "method": "bdev_malloc_create" 00:09:25.948 }, 00:09:25.948 { 00:09:25.948 "method": "bdev_wait_for_examine" 00:09:25.948 } 00:09:25.948 ] 00:09:25.948 } 00:09:25.948 ] 00:09:25.948 } 00:09:25.948 [2024-10-01 13:43:36.112736] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:25.948 [2024-10-01 13:43:36.112850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62410 ] 00:09:26.214 [2024-10-01 13:43:36.255681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.474 [2024-10-01 13:43:36.429518] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.474 [2024-10-01 13:43:36.514639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:26.474 [2024-10-01 13:43:36.591557] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:09:26.474 [2024-10-01 13:43:36.591636] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:26.733 [2024-10-01 13:43:36.778945] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:26.991 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:09:26.991 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:26.991 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:09:26.991 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:09:26.991 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:09:26.991 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:26.991 00:09:26.991 real 0m0.881s 00:09:26.991 user 0m0.592s 00:09:26.991 sys 0m0.242s 00:09:26.991 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.991 13:43:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:26.991 ************************************ 00:09:26.991 END TEST dd_invalid_output_count 00:09:26.991 ************************************ 00:09:26.991 13:43:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:26.992 ************************************ 00:09:26.992 START TEST dd_bs_not_multiple 00:09:26.992 ************************************ 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:26.992 13:43:36 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:26.992 13:43:36 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:26.992 { 00:09:26.992 "subsystems": [ 00:09:26.992 { 00:09:26.992 "subsystem": "bdev", 00:09:26.992 "config": [ 00:09:26.992 { 00:09:26.992 "params": { 00:09:26.992 "block_size": 512, 00:09:26.992 "num_blocks": 512, 00:09:26.992 "name": "malloc0" 00:09:26.992 }, 00:09:26.992 "method": "bdev_malloc_create" 00:09:26.992 }, 00:09:26.992 { 00:09:26.992 "params": { 00:09:26.992 "block_size": 512, 00:09:26.992 "num_blocks": 512, 00:09:26.992 "name": "malloc1" 00:09:26.992 }, 00:09:26.992 "method": "bdev_malloc_create" 00:09:26.992 }, 00:09:26.992 { 00:09:26.992 "method": "bdev_wait_for_examine" 00:09:26.992 } 00:09:26.992 ] 00:09:26.992 } 00:09:26.992 ] 00:09:26.992 } 00:09:26.992 [2024-10-01 13:43:37.039535] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:26.992 [2024-10-01 13:43:37.039658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62446 ] 00:09:27.251 [2024-10-01 13:43:37.176552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.251 [2024-10-01 13:43:37.349837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.509 [2024-10-01 13:43:37.434977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.509 [2024-10-01 13:43:37.520096] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:09:27.509 [2024-10-01 13:43:37.520186] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:27.768 [2024-10-01 13:43:37.718290] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:27.768 13:43:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:09:27.768 13:43:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:27.768 13:43:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:09:27.768 13:43:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:09:27.768 13:43:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:09:27.768 13:43:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:27.768 00:09:27.768 real 0m0.894s 00:09:27.768 user 0m0.619s 00:09:27.768 sys 0m0.229s 00:09:27.768 13:43:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.768 13:43:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:27.768 ************************************ 00:09:27.768 END TEST dd_bs_not_multiple 00:09:27.768 ************************************ 00:09:27.768 00:09:27.768 real 0m8.164s 00:09:27.768 user 0m4.740s 00:09:27.768 sys 0m2.824s 00:09:27.768 13:43:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.768 13:43:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:27.768 ************************************ 00:09:27.768 END TEST spdk_dd_negative 00:09:27.768 ************************************ 00:09:28.027 00:09:28.027 real 1m31.968s 00:09:28.027 user 1m1.038s 00:09:28.027 sys 0m38.255s 00:09:28.027 13:43:37 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.027 13:43:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:28.027 ************************************ 00:09:28.027 END TEST spdk_dd 00:09:28.027 ************************************ 00:09:28.027 13:43:37 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:28.027 13:43:37 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:09:28.027 13:43:37 -- spdk/autotest.sh@256 -- # timing_exit lib 00:09:28.027 13:43:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:28.027 13:43:37 -- common/autotest_common.sh@10 -- # set +x 00:09:28.027 13:43:38 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:09:28.027 13:43:38 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:09:28.027 13:43:38 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:09:28.027 13:43:38 -- spdk/autotest.sh@273 -- 
# export NET_TYPE 00:09:28.027 13:43:38 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:09:28.027 13:43:38 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:09:28.027 13:43:38 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:28.027 13:43:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:28.027 13:43:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.027 13:43:38 -- common/autotest_common.sh@10 -- # set +x 00:09:28.027 ************************************ 00:09:28.027 START TEST nvmf_tcp 00:09:28.027 ************************************ 00:09:28.027 13:43:38 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:28.027 * Looking for test storage... 00:09:28.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:28.027 13:43:38 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:28.027 13:43:38 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:28.027 13:43:38 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:28.287 13:43:38 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.287 13:43:38 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:28.287 13:43:38 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.287 13:43:38 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:28.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.287 --rc genhtml_branch_coverage=1 00:09:28.287 --rc genhtml_function_coverage=1 00:09:28.287 --rc genhtml_legend=1 00:09:28.287 --rc geninfo_all_blocks=1 00:09:28.287 --rc geninfo_unexecuted_blocks=1 00:09:28.287 00:09:28.287 ' 00:09:28.287 13:43:38 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:28.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.287 --rc genhtml_branch_coverage=1 00:09:28.287 --rc genhtml_function_coverage=1 00:09:28.287 --rc genhtml_legend=1 00:09:28.287 --rc geninfo_all_blocks=1 00:09:28.287 --rc geninfo_unexecuted_blocks=1 00:09:28.287 00:09:28.287 ' 00:09:28.287 13:43:38 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:28.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.287 --rc genhtml_branch_coverage=1 00:09:28.287 --rc genhtml_function_coverage=1 00:09:28.287 --rc genhtml_legend=1 00:09:28.287 --rc geninfo_all_blocks=1 00:09:28.287 --rc geninfo_unexecuted_blocks=1 00:09:28.287 00:09:28.287 ' 00:09:28.287 13:43:38 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:28.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.287 --rc genhtml_branch_coverage=1 00:09:28.287 --rc genhtml_function_coverage=1 00:09:28.287 --rc genhtml_legend=1 00:09:28.287 --rc geninfo_all_blocks=1 00:09:28.287 --rc geninfo_unexecuted_blocks=1 00:09:28.287 00:09:28.287 ' 00:09:28.287 13:43:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:28.287 13:43:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:28.287 13:43:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:28.287 13:43:38 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:28.287 13:43:38 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.287 13:43:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.287 ************************************ 00:09:28.287 START TEST nvmf_target_core 00:09:28.287 ************************************ 00:09:28.287 13:43:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:28.287 * Looking for test storage... 00:09:28.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:28.287 13:43:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:28.287 13:43:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:09:28.287 13:43:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.575 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:28.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.576 --rc genhtml_branch_coverage=1 00:09:28.576 --rc genhtml_function_coverage=1 00:09:28.576 --rc genhtml_legend=1 00:09:28.576 --rc geninfo_all_blocks=1 00:09:28.576 --rc geninfo_unexecuted_blocks=1 00:09:28.576 00:09:28.576 ' 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:28.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.576 --rc genhtml_branch_coverage=1 00:09:28.576 --rc genhtml_function_coverage=1 00:09:28.576 --rc genhtml_legend=1 00:09:28.576 --rc geninfo_all_blocks=1 00:09:28.576 --rc geninfo_unexecuted_blocks=1 00:09:28.576 00:09:28.576 ' 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:28.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.576 --rc genhtml_branch_coverage=1 00:09:28.576 --rc genhtml_function_coverage=1 00:09:28.576 --rc genhtml_legend=1 00:09:28.576 --rc geninfo_all_blocks=1 00:09:28.576 --rc geninfo_unexecuted_blocks=1 00:09:28.576 00:09:28.576 ' 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:28.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.576 --rc genhtml_branch_coverage=1 00:09:28.576 --rc genhtml_function_coverage=1 00:09:28.576 --rc genhtml_legend=1 00:09:28.576 --rc geninfo_all_blocks=1 00:09:28.576 --rc geninfo_unexecuted_blocks=1 00:09:28.576 00:09:28.576 ' 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.576 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.576 ************************************ 00:09:28.576 START TEST nvmf_host_management 00:09:28.576 ************************************ 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:28.576 * Looking for test storage... 
00:09:28.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:09:28.576 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:28.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.577 --rc genhtml_branch_coverage=1 00:09:28.577 --rc genhtml_function_coverage=1 00:09:28.577 --rc genhtml_legend=1 00:09:28.577 --rc geninfo_all_blocks=1 00:09:28.577 --rc geninfo_unexecuted_blocks=1 00:09:28.577 00:09:28.577 ' 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:28.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.577 --rc genhtml_branch_coverage=1 00:09:28.577 --rc genhtml_function_coverage=1 00:09:28.577 --rc genhtml_legend=1 00:09:28.577 --rc geninfo_all_blocks=1 00:09:28.577 --rc geninfo_unexecuted_blocks=1 00:09:28.577 00:09:28.577 ' 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:28.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.577 --rc genhtml_branch_coverage=1 00:09:28.577 --rc genhtml_function_coverage=1 00:09:28.577 --rc genhtml_legend=1 00:09:28.577 --rc geninfo_all_blocks=1 00:09:28.577 --rc geninfo_unexecuted_blocks=1 00:09:28.577 00:09:28.577 ' 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:28.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.577 --rc genhtml_branch_coverage=1 00:09:28.577 --rc genhtml_function_coverage=1 00:09:28.577 --rc genhtml_legend=1 00:09:28.577 --rc geninfo_all_blocks=1 00:09:28.577 --rc geninfo_unexecuted_blocks=1 00:09:28.577 00:09:28.577 ' 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.577 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.578 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:28.578 13:43:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:28.578 Cannot find device "nvmf_init_br" 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:28.578 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:28.837 Cannot find device "nvmf_init_br2" 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:28.837 Cannot find device "nvmf_tgt_br" 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:28.837 Cannot find device "nvmf_tgt_br2" 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:28.837 Cannot find device "nvmf_init_br" 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:28.837 Cannot find device "nvmf_init_br2" 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:28.837 Cannot find device "nvmf_tgt_br" 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:28.837 Cannot find device "nvmf_tgt_br2" 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:28.837 Cannot find device "nvmf_br" 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:28.837 Cannot find device "nvmf_init_if" 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:28.837 Cannot find device "nvmf_init_if2" 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:28.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:28.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:28.837 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:28.837 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:29.098 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:29.099 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:29.099 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.200 ms 00:09:29.099 00:09:29.099 --- 10.0.0.3 ping statistics --- 00:09:29.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.099 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:29.099 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:29.099 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:09:29.099 00:09:29.099 --- 10.0.0.4 ping statistics --- 00:09:29.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.099 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:29.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:09:29.099 00:09:29.099 --- 10.0.0.1 ping statistics --- 00:09:29.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.099 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:29.099 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:29.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:29.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:09:29.359 00:09:29.359 --- 10.0.0.2 ping statistics --- 00:09:29.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.359 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=62788 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 62788 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62788 ']' 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.359 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.359 [2024-10-01 13:43:39.373245] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
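The block of ip/iptables commands above (nvmf_veth_init from test/nvmf/common.sh) builds the virtual topology the NVMe/TCP tests run on. For reference, the same topology can be reproduced stand-alone with the sketch below; it only condenses the commands already recorded in the log (interface names, the nvmf_tgt_ns_spdk namespace, the 10.0.0.0/24 addressing, the port-4420 iptables rules) and is not the harness's own implementation. It assumes root on a Linux host where these interface names are free.

# Condensed sketch of the veth/bridge topology used by the NVMe/TCP tests.
ip netns add nvmf_tgt_ns_spdk

# Two initiator-side and two target-side veth pairs.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side ends live inside the namespace where nvmf_tgt runs.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing as seen in the ping checks: .1/.2 initiator side, .3/.4 target side.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and tie the bridge-side ends together with nvmf_br.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP traffic (port 4420) in, and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check, mirroring the log: target-side addresses reachable from the host.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4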
00:09:29.359 [2024-10-01 13:43:39.373381] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.359 [2024-10-01 13:43:39.517035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.618 [2024-10-01 13:43:39.654718] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.618 [2024-10-01 13:43:39.654779] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.618 [2024-10-01 13:43:39.654794] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.618 [2024-10-01 13:43:39.654804] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.618 [2024-10-01 13:43:39.654813] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.618 [2024-10-01 13:43:39.654994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.618 [2024-10-01 13:43:39.655494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.618 [2024-10-01 13:43:39.655632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:29.618 [2024-10-01 13:43:39.655639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.618 [2024-10-01 13:43:39.713902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.554 [2024-10-01 13:43:40.480741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.554 Malloc0 00:09:30.554 [2024-10-01 13:43:40.547816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62848 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62848 /var/tmp/bdevperf.sock 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62848 ']' 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:30.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
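The RPC payload piped through cat into rpc_cmd is not echoed in the log, so the following reconstruction of the target-side configuration is hypothetical. It is inferred from what is visible: the nvmf_create_transport -t tcp -o -u 8192 call, the Malloc0 bdev with MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512, the nqn.2016-06.io.spdk:cnode0 / nqn.2016-06.io.spdk:host0 NQNs used by bdevperf below, and the listener notice for 10.0.0.3 port 4420. It uses plain scripts/rpc.py calls over the default /var/tmp/spdk.sock socket instead of the test's rpc_cmd wrapper.

# Hypothetical equivalent of the elided rpcs.txt configuration.
# nvmf_tgt was started inside the namespace, as shown earlier in the log:
#   ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192               # shown verbatim in the log
$RPC bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420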
00:09:30.554 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:09:30.555 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.555 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:09:30.555 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.555 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:30.555 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:30.555 { 00:09:30.555 "params": { 00:09:30.555 "name": "Nvme$subsystem", 00:09:30.555 "trtype": "$TEST_TRANSPORT", 00:09:30.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.555 "adrfam": "ipv4", 00:09:30.555 "trsvcid": "$NVMF_PORT", 00:09:30.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.555 "hdgst": ${hdgst:-false}, 00:09:30.555 "ddgst": ${ddgst:-false} 00:09:30.555 }, 00:09:30.555 "method": "bdev_nvme_attach_controller" 00:09:30.555 } 00:09:30.555 EOF 00:09:30.555 )") 00:09:30.555 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:09:30.555 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:09:30.555 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:09:30.555 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:30.555 "params": { 00:09:30.555 "name": "Nvme0", 00:09:30.555 "trtype": "tcp", 00:09:30.555 "traddr": "10.0.0.3", 00:09:30.555 "adrfam": "ipv4", 00:09:30.555 "trsvcid": "4420", 00:09:30.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:30.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:30.555 "hdgst": false, 00:09:30.555 "ddgst": false 00:09:30.555 }, 00:09:30.555 "method": "bdev_nvme_attach_controller" 00:09:30.555 }' 00:09:30.555 [2024-10-01 13:43:40.655497] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:30.555 [2024-10-01 13:43:40.655588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62848 ] 00:09:30.813 [2024-10-01 13:43:40.796581] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.813 [2024-10-01 13:43:40.919676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.813 [2024-10-01 13:43:40.985121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.071 Running I/O for 10 seconds... 
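For reference, the bdevperf job launched above can be reproduced outside the harness. The bdev_nvme_attach_controller parameters below are copied from the JSON fragment printed by gen_nvmf_target_json in the log; the surrounding "subsystems"/"config" wrapper and the /tmp/nvme0.json path are assumptions of this sketch (the harness feeds the config through /dev/fd/63 instead).

# Stand-alone sketch of the bdevperf run recorded above.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 64-deep queue, 64 KiB I/Os, verify workload, 10 seconds: matching -q 64 -o 65536 -w verify -t 10.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10

The harness then polls rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 (filtered with jq -r '.bdevs[0].num_read_ops') until at least 100 reads have completed, which is the waitforio loop visible in the lines that follow.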
00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:31.637 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.896 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:09:31.896 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:09:31.896 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:31.896 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:31.896 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:31.896 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:31.896 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.896 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:31.896 [2024-10-01 
13:43:41.821336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.896 [2024-10-01 13:43:41.821386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.896 [2024-10-01 13:43:41.821411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.896 [2024-10-01 13:43:41.821422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.896 [2024-10-01 13:43:41.821434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.896 [2024-10-01 13:43:41.821443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.896 [2024-10-01 13:43:41.821455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.896 [2024-10-01 13:43:41.821465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.896 [2024-10-01 13:43:41.821477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.896 [2024-10-01 13:43:41.821486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.896 [2024-10-01 13:43:41.821498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.896 [2024-10-01 13:43:41.821507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.896 [2024-10-01 13:43:41.821519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.896 [2024-10-01 13:43:41.821528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.821982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.821992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.822003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.822013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.822033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.822042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.822053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.822062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.822073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.822083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.822098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.822108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.822120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.822129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.822141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.822160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.822171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.822180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.822191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.822200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.822211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.822221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.822232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.822241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.822252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.822261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.897 [2024-10-01 13:43:41.822272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.897 [2024-10-01 13:43:41.822293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.898 [2024-10-01 13:43:41.822813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.898 [2024-10-01 13:43:41.822824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168e6b0 is same with the state(6) to be set 00:09:31.898 [2024-10-01 13:43:41.822899] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x168e6b0 was disconnected and freed. reset controller. 00:09:31.898 [2024-10-01 13:43:41.824033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:31.898 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.898 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:31.898 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.898 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:31.898 task offset: 512 on job bdev=Nvme0n1 fails 00:09:31.898 00:09:31.898 Latency(us) 00:09:31.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.898 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:31.898 Job: Nvme0n1 ended in about 0.71 seconds with error 00:09:31.898 Verification LBA range: start 0x0 length 0x400 00:09:31.898 Nvme0n1 : 0.71 1433.32 89.58 89.58 0.00 41014.57 2353.34 39559.91 00:09:31.898 =================================================================================================================== 00:09:31.898 Total : 1433.32 89.58 89.58 0.00 41014.57 2353.34 39559.91 00:09:31.898 [2024-10-01 13:43:41.826218] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:31.898 [2024-10-01 13:43:41.826251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168eb20 (9): Bad file descriptor 00:09:31.898 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.898 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:31.898 
[2024-10-01 13:43:41.836169] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:32.833 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62848 00:09:32.833 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62848) - No such process 00:09:32.833 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:32.833 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:32.833 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:32.833 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:32.833 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:09:32.833 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:09:32.833 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:32.833 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:32.833 { 00:09:32.833 "params": { 00:09:32.833 "name": "Nvme$subsystem", 00:09:32.833 "trtype": "$TEST_TRANSPORT", 00:09:32.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.833 "adrfam": "ipv4", 00:09:32.833 "trsvcid": "$NVMF_PORT", 00:09:32.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.833 "hdgst": ${hdgst:-false}, 00:09:32.833 "ddgst": ${ddgst:-false} 00:09:32.833 }, 00:09:32.833 "method": "bdev_nvme_attach_controller" 00:09:32.833 } 00:09:32.833 EOF 00:09:32.833 )") 00:09:32.833 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:09:32.833 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:09:32.833 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:09:32.833 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:32.833 "params": { 00:09:32.833 "name": "Nvme0", 00:09:32.833 "trtype": "tcp", 00:09:32.833 "traddr": "10.0.0.3", 00:09:32.833 "adrfam": "ipv4", 00:09:32.833 "trsvcid": "4420", 00:09:32.833 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:32.834 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:32.834 "hdgst": false, 00:09:32.834 "ddgst": false 00:09:32.834 }, 00:09:32.834 "method": "bdev_nvme_attach_controller" 00:09:32.834 }' 00:09:32.834 [2024-10-01 13:43:42.889896] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:32.834 [2024-10-01 13:43:42.890010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62891 ] 00:09:33.092 [2024-10-01 13:43:43.024632] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.092 [2024-10-01 13:43:43.162441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.092 [2024-10-01 13:43:43.229238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.351 Running I/O for 1 seconds... 00:09:34.287 1280.00 IOPS, 80.00 MiB/s 00:09:34.287 Latency(us) 00:09:34.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.287 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:34.287 Verification LBA range: start 0x0 length 0x400 00:09:34.287 Nvme0n1 : 1.01 1328.98 83.06 0.00 0.00 47033.65 7626.01 45517.73 00:09:34.287 =================================================================================================================== 00:09:34.287 Total : 1328.98 83.06 0.00 0.00 47033.65 7626.01 45517.73 00:09:34.853 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:34.853 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:34.853 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:34.853 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:34.853 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:34.853 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:34.853 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:34.853 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.853 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:34.853 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.853 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.853 rmmod nvme_tcp 00:09:34.853 rmmod nvme_fabrics 00:09:34.853 rmmod nvme_keyring 00:09:34.853 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:35.110 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:35.110 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:35.110 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 62788 ']' 00:09:35.110 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 62788 00:09:35.110 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 62788 ']' 00:09:35.110 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 62788 00:09:35.110 
13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:09:35.110 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:35.110 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62788 00:09:35.110 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:35.110 killing process with pid 62788 00:09:35.110 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:35.110 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62788' 00:09:35.110 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 62788 00:09:35.110 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 62788 00:09:35.368 [2024-10-01 13:43:45.316390] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:35.368 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:35.369 13:43:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.369 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:35.627 ************************************ 00:09:35.627 END TEST nvmf_host_management 00:09:35.627 00:09:35.627 real 0m7.078s 00:09:35.627 user 0m25.955s 00:09:35.627 sys 0m1.717s 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:35.627 ************************************ 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.627 ************************************ 00:09:35.627 START TEST nvmf_lvol 00:09:35.627 ************************************ 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:35.627 * Looking for test storage... 
00:09:35.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:09:35.627 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:35.885 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:35.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.886 --rc genhtml_branch_coverage=1 00:09:35.886 --rc genhtml_function_coverage=1 00:09:35.886 --rc genhtml_legend=1 00:09:35.886 --rc geninfo_all_blocks=1 00:09:35.886 --rc geninfo_unexecuted_blocks=1 00:09:35.886 00:09:35.886 ' 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:35.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.886 --rc genhtml_branch_coverage=1 00:09:35.886 --rc genhtml_function_coverage=1 00:09:35.886 --rc genhtml_legend=1 00:09:35.886 --rc geninfo_all_blocks=1 00:09:35.886 --rc geninfo_unexecuted_blocks=1 00:09:35.886 00:09:35.886 ' 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:35.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.886 --rc genhtml_branch_coverage=1 00:09:35.886 --rc genhtml_function_coverage=1 00:09:35.886 --rc genhtml_legend=1 00:09:35.886 --rc geninfo_all_blocks=1 00:09:35.886 --rc geninfo_unexecuted_blocks=1 00:09:35.886 00:09:35.886 ' 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:35.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.886 --rc genhtml_branch_coverage=1 00:09:35.886 --rc genhtml_function_coverage=1 00:09:35.886 --rc genhtml_legend=1 00:09:35.886 --rc geninfo_all_blocks=1 00:09:35.886 --rc geninfo_unexecuted_blocks=1 00:09:35.886 00:09:35.886 ' 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.886 13:43:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.886 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:35.886 
13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:35.886 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:35.887 Cannot find device "nvmf_init_br" 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:35.887 Cannot find device "nvmf_init_br2" 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:35.887 Cannot find device "nvmf_tgt_br" 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.887 Cannot find device "nvmf_tgt_br2" 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:35.887 Cannot find device "nvmf_init_br" 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:35.887 Cannot find device "nvmf_init_br2" 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:35.887 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:35.887 Cannot find device "nvmf_tgt_br" 00:09:35.887 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:35.887 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:35.887 Cannot find device "nvmf_tgt_br2" 00:09:35.887 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:35.887 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:35.887 Cannot find device "nvmf_br" 00:09:35.887 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:35.887 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:35.887 Cannot find device "nvmf_init_if" 00:09:35.887 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:35.887 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:35.887 Cannot find device "nvmf_init_if2" 00:09:35.887 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:35.887 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:36.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:36.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:36.145 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:36.404 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:36.404 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:09:36.404 00:09:36.404 --- 10.0.0.3 ping statistics --- 00:09:36.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.404 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:36.404 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:36.404 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:09:36.404 00:09:36.404 --- 10.0.0.4 ping statistics --- 00:09:36.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.404 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:36.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:36.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:09:36.404 00:09:36.404 --- 10.0.0.1 ping statistics --- 00:09:36.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.404 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:36.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:36.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:09:36.404 00:09:36.404 --- 10.0.0.2 ping statistics --- 00:09:36.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.404 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=63158 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 63158 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 63158 ']' 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:36.404 13:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:36.404 [2024-10-01 13:43:46.446201] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:36.404 [2024-10-01 13:43:46.446631] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.709 [2024-10-01 13:43:46.592690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:36.709 [2024-10-01 13:43:46.726057] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.709 [2024-10-01 13:43:46.726377] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.709 [2024-10-01 13:43:46.726580] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.709 [2024-10-01 13:43:46.726738] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.709 [2024-10-01 13:43:46.726785] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.709 [2024-10-01 13:43:46.727082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.709 [2024-10-01 13:43:46.727178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.709 [2024-10-01 13:43:46.727185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.709 [2024-10-01 13:43:46.786446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:37.642 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:37.642 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:37.642 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:37.642 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:37.642 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:37.642 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.642 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:37.901 [2024-10-01 13:43:47.848619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.901 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.159 13:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:38.159 13:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.727 13:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:38.727 13:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:38.991 13:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:39.251 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3629283b-a74b-4113-a6bb-c633c18440df 00:09:39.251 13:43:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3629283b-a74b-4113-a6bb-c633c18440df lvol 20 00:09:39.509 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=73f9b840-aeea-47d1-a1b2-3e7539ea9e34 00:09:39.509 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:40.077 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 73f9b840-aeea-47d1-a1b2-3e7539ea9e34 00:09:40.366 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:40.624 [2024-10-01 13:43:50.626578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:40.624 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:40.883 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:40.883 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63245 00:09:40.883 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:42.260 13:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 73f9b840-aeea-47d1-a1b2-3e7539ea9e34 MY_SNAPSHOT 00:09:42.260 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=356a895c-7a31-4cd2-8722-2be9b9685c90 00:09:42.260 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 73f9b840-aeea-47d1-a1b2-3e7539ea9e34 30 00:09:42.826 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 356a895c-7a31-4cd2-8722-2be9b9685c90 MY_CLONE 00:09:43.084 13:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=dfe4be7c-e311-4dcd-8301-f6eae419a600 00:09:43.084 13:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate dfe4be7c-e311-4dcd-8301-f6eae419a600 00:09:43.653 13:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63245 00:09:51.839 Initializing NVMe Controllers 00:09:51.839 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:51.839 Controller IO queue size 128, less than required. 00:09:51.839 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:51.839 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:51.839 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:51.839 Initialization complete. Launching workers. 
00:09:51.839 ======================================================== 00:09:51.839 Latency(us) 00:09:51.839 Device Information : IOPS MiB/s Average min max 00:09:51.839 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10268.40 40.11 12468.05 2338.98 72721.21 00:09:51.839 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10310.00 40.27 12423.33 262.45 60506.15 00:09:51.839 ======================================================== 00:09:51.839 Total : 20578.40 80.38 12445.65 262.45 72721.21 00:09:51.839 00:09:51.839 13:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:51.839 13:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 73f9b840-aeea-47d1-a1b2-3e7539ea9e34 00:09:51.839 13:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3629283b-a74b-4113-a6bb-c633c18440df 00:09:52.101 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:52.101 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:52.101 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:52.101 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:52.101 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:52.101 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.101 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:52.101 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.101 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.101 rmmod nvme_tcp 00:09:52.358 rmmod nvme_fabrics 00:09:52.358 rmmod nvme_keyring 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 63158 ']' 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 63158 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 63158 ']' 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 63158 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63158 00:09:52.358 killing process with pid 63158 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 63158' 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 63158 00:09:52.358 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 63158 00:09:52.616 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:52.616 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:52.617 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:09:52.874 ************************************ 00:09:52.874 END TEST nvmf_lvol 00:09:52.874 ************************************ 00:09:52.874 00:09:52.874 real 0m17.259s 00:09:52.874 user 
1m9.914s 00:09:52.874 sys 0m4.366s 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.874 ************************************ 00:09:52.874 START TEST nvmf_lvs_grow 00:09:52.874 ************************************ 00:09:52.874 13:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:52.874 * Looking for test storage... 00:09:52.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:52.874 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:53.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.133 --rc genhtml_branch_coverage=1 00:09:53.133 --rc genhtml_function_coverage=1 00:09:53.133 --rc genhtml_legend=1 00:09:53.133 --rc geninfo_all_blocks=1 00:09:53.133 --rc geninfo_unexecuted_blocks=1 00:09:53.133 00:09:53.133 ' 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:53.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.133 --rc genhtml_branch_coverage=1 00:09:53.133 --rc genhtml_function_coverage=1 00:09:53.133 --rc genhtml_legend=1 00:09:53.133 --rc geninfo_all_blocks=1 00:09:53.133 --rc geninfo_unexecuted_blocks=1 00:09:53.133 00:09:53.133 ' 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:53.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.133 --rc genhtml_branch_coverage=1 00:09:53.133 --rc genhtml_function_coverage=1 00:09:53.133 --rc genhtml_legend=1 00:09:53.133 --rc geninfo_all_blocks=1 00:09:53.133 --rc geninfo_unexecuted_blocks=1 00:09:53.133 00:09:53.133 ' 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:53.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.133 --rc genhtml_branch_coverage=1 00:09:53.133 --rc genhtml_function_coverage=1 00:09:53.133 --rc genhtml_legend=1 00:09:53.133 --rc geninfo_all_blocks=1 00:09:53.133 --rc geninfo_unexecuted_blocks=1 00:09:53.133 00:09:53.133 ' 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:53.133 13:44:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.133 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.134 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
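For orientation: the two RPC endpoints configured here are used later in this run in a fixed pattern — the nvmf target is driven through rpc.py on its default /var/tmp/spdk.sock, while the initiator-side bdevperf application is driven through /var/tmp/bdevperf.sock. A minimal sketch, condensed from the trace further below (only the backgrounding of bdevperf with & is assumed; the harness launches it through a helper):

  # start bdevperf in wait-for-RPC mode (-z) on its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # attach the exported namespace as bdev Nvme0n1 over NVMe/TCP
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # kick off the configured workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests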
00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
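The variables above describe a small veth/bridge topology: 10.0.0.1 and 10.0.0.2 on the initiator side in the root namespace, 10.0.0.3 and 10.0.0.4 on the target side inside netns nvmf_tgt_ns_spdk, all joined through the nvmf_br bridge. A condensed sketch of what nvmf_veth_init does next, trimmed to the first initiator/target pair (the second pair is configured the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the netns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br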
00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:53.134 Cannot find device "nvmf_init_br" 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:53.134 Cannot find device "nvmf_init_br2" 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:53.134 Cannot find device "nvmf_tgt_br" 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.134 Cannot find device "nvmf_tgt_br2" 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:53.134 Cannot find device "nvmf_init_br" 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:53.134 Cannot find device "nvmf_init_br2" 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:53.134 Cannot find device "nvmf_tgt_br" 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:53.134 Cannot find device "nvmf_tgt_br2" 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:53.134 Cannot find device "nvmf_br" 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:53.134 Cannot find device "nvmf_init_if" 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:09:53.134 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:53.392 Cannot find device "nvmf_init_if2" 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.392 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.392 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:53.392 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
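The ACCEPT rules added next are installed through the ipts wrapper, which tags every rule with an SPDK_NVMF comment; the matching iptr cleanup (visible at the end of the nvmf_lvol run above) can then drop exactly those rules without touching the rest of the ruleset. A minimal sketch of the pattern, using the first rule from the trace:

  # install: tag the rule so it can be found again later
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # cleanup: rewrite the ruleset with every SPDK_NVMF-tagged rule filtered out
  iptables-save | grep -v SPDK_NVMF | iptables-restore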
00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:53.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:53.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:09:53.393 00:09:53.393 --- 10.0.0.3 ping statistics --- 00:09:53.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.393 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:53.393 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:53.393 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:09:53.393 00:09:53.393 --- 10.0.0.4 ping statistics --- 00:09:53.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.393 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:53.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:09:53.393 00:09:53.393 --- 10.0.0.1 ping statistics --- 00:09:53.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.393 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:53.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:53.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:09:53.393 00:09:53.393 --- 10.0.0.2 ping statistics --- 00:09:53.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.393 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:53.393 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:53.651 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:53.651 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:53.651 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:53.651 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:53.651 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=63628 00:09:53.651 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:53.651 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 63628 00:09:53.651 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 63628 ']' 00:09:53.651 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.651 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.651 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.651 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.651 13:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:53.651 [2024-10-01 13:44:03.654795] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:53.651 [2024-10-01 13:44:03.655149] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.651 [2024-10-01 13:44:03.795084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.910 [2024-10-01 13:44:03.927392] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.910 [2024-10-01 13:44:03.927460] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.910 [2024-10-01 13:44:03.927475] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.910 [2024-10-01 13:44:03.927486] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.910 [2024-10-01 13:44:03.927495] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.910 [2024-10-01 13:44:03.927529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.910 [2024-10-01 13:44:03.987729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:54.476 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.476 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:54.476 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:54.476 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:54.476 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:54.736 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.736 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:54.995 [2024-10-01 13:44:04.966701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.995 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:54.995 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:54.995 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.995 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:54.995 ************************************ 00:09:54.995 START TEST lvs_grow_clean 00:09:54.995 ************************************ 00:09:54.995 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:54.995 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:54.995 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:54.995 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:54.995 13:44:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:54.995 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:54.995 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:54.995 13:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:54.995 13:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:54.995 13:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:55.280 13:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:55.280 13:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:55.539 13:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cf9cac5b-c197-4a26-a058-cf9b93f59e57 00:09:55.539 13:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf9cac5b-c197-4a26-a058-cf9b93f59e57 00:09:55.539 13:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:55.797 13:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:55.797 13:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:55.797 13:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cf9cac5b-c197-4a26-a058-cf9b93f59e57 lvol 150 00:09:56.055 13:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fd416100-c601-4275-a7d3-6a3c6e21362b 00:09:56.055 13:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:56.055 13:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:56.314 [2024-10-01 13:44:06.475639] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:56.314 [2024-10-01 13:44:06.475749] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:56.314 true 00:09:56.573 13:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:56.573 13:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf9cac5b-c197-4a26-a058-cf9b93f59e57 00:09:56.831 13:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:56.831 13:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:57.090 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fd416100-c601-4275-a7d3-6a3c6e21362b 00:09:57.347 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:57.606 [2024-10-01 13:44:07.580221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:57.606 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:57.865 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63716 00:09:57.865 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:57.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:57.865 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:57.865 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63716 /var/tmp/bdevperf.sock 00:09:57.865 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 63716 ']' 00:09:57.865 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:57.865 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.865 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:57.865 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.865 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:57.865 [2024-10-01 13:44:07.983631] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:57.865 [2024-10-01 13:44:07.984028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63716 ] 00:09:58.123 [2024-10-01 13:44:08.124542] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.123 [2024-10-01 13:44:08.245521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.124 [2024-10-01 13:44:08.299683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:59.062 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:59.062 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:59.062 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:59.320 Nvme0n1 00:09:59.320 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:59.579 [ 00:09:59.579 { 00:09:59.579 "name": "Nvme0n1", 00:09:59.579 "aliases": [ 00:09:59.579 "fd416100-c601-4275-a7d3-6a3c6e21362b" 00:09:59.579 ], 00:09:59.579 "product_name": "NVMe disk", 00:09:59.579 "block_size": 4096, 00:09:59.579 "num_blocks": 38912, 00:09:59.579 "uuid": "fd416100-c601-4275-a7d3-6a3c6e21362b", 00:09:59.579 "numa_id": -1, 00:09:59.579 "assigned_rate_limits": { 00:09:59.579 "rw_ios_per_sec": 0, 00:09:59.579 "rw_mbytes_per_sec": 0, 00:09:59.579 "r_mbytes_per_sec": 0, 00:09:59.579 "w_mbytes_per_sec": 0 00:09:59.579 }, 00:09:59.579 "claimed": false, 00:09:59.579 "zoned": false, 00:09:59.579 "supported_io_types": { 00:09:59.579 "read": true, 00:09:59.579 "write": true, 00:09:59.579 "unmap": true, 00:09:59.579 "flush": true, 00:09:59.579 "reset": true, 00:09:59.579 "nvme_admin": true, 00:09:59.579 "nvme_io": true, 00:09:59.579 "nvme_io_md": false, 00:09:59.579 "write_zeroes": true, 00:09:59.579 "zcopy": false, 00:09:59.579 "get_zone_info": false, 00:09:59.579 "zone_management": false, 00:09:59.579 "zone_append": false, 00:09:59.579 "compare": true, 00:09:59.579 "compare_and_write": true, 00:09:59.579 "abort": true, 00:09:59.579 "seek_hole": false, 00:09:59.579 "seek_data": false, 00:09:59.579 "copy": true, 00:09:59.579 "nvme_iov_md": false 00:09:59.579 }, 00:09:59.579 "memory_domains": [ 00:09:59.579 { 00:09:59.579 "dma_device_id": "system", 00:09:59.579 "dma_device_type": 1 00:09:59.579 } 00:09:59.579 ], 00:09:59.579 "driver_specific": { 00:09:59.579 "nvme": [ 00:09:59.579 { 00:09:59.579 "trid": { 00:09:59.579 "trtype": "TCP", 00:09:59.579 "adrfam": "IPv4", 00:09:59.579 "traddr": "10.0.0.3", 00:09:59.579 "trsvcid": "4420", 00:09:59.579 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:59.579 }, 00:09:59.579 "ctrlr_data": { 00:09:59.579 "cntlid": 1, 00:09:59.579 "vendor_id": "0x8086", 00:09:59.579 "model_number": "SPDK bdev Controller", 00:09:59.579 "serial_number": "SPDK0", 00:09:59.579 "firmware_revision": "25.01", 00:09:59.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:59.579 "oacs": { 00:09:59.579 "security": 0, 00:09:59.579 "format": 0, 00:09:59.579 "firmware": 0, 
00:09:59.579 "ns_manage": 0 00:09:59.579 }, 00:09:59.579 "multi_ctrlr": true, 00:09:59.579 "ana_reporting": false 00:09:59.579 }, 00:09:59.579 "vs": { 00:09:59.579 "nvme_version": "1.3" 00:09:59.579 }, 00:09:59.579 "ns_data": { 00:09:59.579 "id": 1, 00:09:59.579 "can_share": true 00:09:59.579 } 00:09:59.579 } 00:09:59.579 ], 00:09:59.579 "mp_policy": "active_passive" 00:09:59.579 } 00:09:59.579 } 00:09:59.579 ] 00:09:59.579 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:59.579 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63745 00:09:59.579 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:59.839 Running I/O for 10 seconds... 00:10:00.774 Latency(us) 00:10:00.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.774 Nvme0n1 : 1.00 6200.00 24.22 0.00 0.00 0.00 0.00 0.00 00:10:00.774 =================================================================================================================== 00:10:00.774 Total : 6200.00 24.22 0.00 0.00 0.00 0.00 0.00 00:10:00.774 00:10:01.710 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cf9cac5b-c197-4a26-a058-cf9b93f59e57 00:10:01.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.710 Nvme0n1 : 2.00 6402.00 25.01 0.00 0.00 0.00 0.00 0.00 00:10:01.710 =================================================================================================================== 00:10:01.710 Total : 6402.00 25.01 0.00 0.00 0.00 0.00 0.00 00:10:01.710 00:10:02.003 true 00:10:02.003 13:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf9cac5b-c197-4a26-a058-cf9b93f59e57 00:10:02.003 13:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:02.569 13:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:02.569 13:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:02.569 13:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63745 00:10:02.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:02.828 Nvme0n1 : 3.00 6257.67 24.44 0.00 0.00 0.00 0.00 0.00 00:10:02.828 =================================================================================================================== 00:10:02.828 Total : 6257.67 24.44 0.00 0.00 0.00 0.00 0.00 00:10:02.828 00:10:03.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.765 Nvme0n1 : 4.00 6407.75 25.03 0.00 0.00 0.00 0.00 0.00 00:10:03.765 =================================================================================================================== 00:10:03.765 Total : 6407.75 25.03 0.00 0.00 0.00 0.00 0.00 00:10:03.765 00:10:04.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:04.700 Nvme0n1 : 5.00 6370.80 24.89 0.00 0.00 0.00 0.00 0.00 00:10:04.700 =================================================================================================================== 00:10:04.700 Total : 6370.80 24.89 0.00 0.00 0.00 0.00 0.00 00:10:04.700 00:10:06.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:06.075 Nvme0n1 : 6.00 6388.50 24.96 0.00 0.00 0.00 0.00 0.00 00:10:06.075 =================================================================================================================== 00:10:06.075 Total : 6388.50 24.96 0.00 0.00 0.00 0.00 0.00 00:10:06.075 00:10:07.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.010 Nvme0n1 : 7.00 6401.14 25.00 0.00 0.00 0.00 0.00 0.00 00:10:07.010 =================================================================================================================== 00:10:07.010 Total : 6401.14 25.00 0.00 0.00 0.00 0.00 0.00 00:10:07.010 00:10:07.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.945 Nvme0n1 : 8.00 6426.50 25.10 0.00 0.00 0.00 0.00 0.00 00:10:07.945 =================================================================================================================== 00:10:07.945 Total : 6426.50 25.10 0.00 0.00 0.00 0.00 0.00 00:10:07.945 00:10:08.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.879 Nvme0n1 : 9.00 6389.78 24.96 0.00 0.00 0.00 0.00 0.00 00:10:08.879 =================================================================================================================== 00:10:08.879 Total : 6389.78 24.96 0.00 0.00 0.00 0.00 0.00 00:10:08.879 00:10:09.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.843 Nvme0n1 : 10.00 6398.50 24.99 0.00 0.00 0.00 0.00 0.00 00:10:09.843 =================================================================================================================== 00:10:09.843 Total : 6398.50 24.99 0.00 0.00 0.00 0.00 0.00 00:10:09.843 00:10:09.843 00:10:09.843 Latency(us) 00:10:09.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.843 Nvme0n1 : 10.00 6409.02 25.04 0.00 0.00 19966.22 15371.17 98661.47 00:10:09.843 =================================================================================================================== 00:10:09.843 Total : 6409.02 25.04 0.00 0.00 19966.22 15371.17 98661.47 00:10:09.843 { 00:10:09.843 "results": [ 00:10:09.843 { 00:10:09.843 "job": "Nvme0n1", 00:10:09.843 "core_mask": "0x2", 00:10:09.843 "workload": "randwrite", 00:10:09.843 "status": "finished", 00:10:09.843 "queue_depth": 128, 00:10:09.843 "io_size": 4096, 00:10:09.843 "runtime": 10.003553, 00:10:09.843 "iops": 6409.022874172806, 00:10:09.843 "mibps": 25.035245602237524, 00:10:09.843 "io_failed": 0, 00:10:09.843 "io_timeout": 0, 00:10:09.843 "avg_latency_us": 19966.219841274567, 00:10:09.843 "min_latency_us": 15371.17090909091, 00:10:09.843 "max_latency_us": 98661.46909090909 00:10:09.843 } 00:10:09.843 ], 00:10:09.843 "core_count": 1 00:10:09.843 } 00:10:09.844 13:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63716 00:10:09.844 13:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 63716 ']' 00:10:09.844 13:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@954 -- # kill -0 63716 00:10:09.844 13:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:10:09.844 13:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:09.844 13:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63716 00:10:09.844 killing process with pid 63716 00:10:09.844 Received shutdown signal, test time was about 10.000000 seconds 00:10:09.844 00:10:09.844 Latency(us) 00:10:09.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.844 =================================================================================================================== 00:10:09.844 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:09.844 13:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:09.844 13:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:09.844 13:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63716' 00:10:09.844 13:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 63716 00:10:09.844 13:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 63716 00:10:10.102 13:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:10.359 13:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:10.622 13:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf9cac5b-c197-4a26-a058-cf9b93f59e57 00:10:10.622 13:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:10.880 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:10.880 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:10.880 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:11.447 [2024-10-01 13:44:21.320613] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:11.447 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf9cac5b-c197-4a26-a058-cf9b93f59e57 00:10:11.447 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:11.447 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf9cac5b-c197-4a26-a058-cf9b93f59e57 00:10:11.447 13:44:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.447 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.447 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.447 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.447 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.447 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.447 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.447 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:11.447 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf9cac5b-c197-4a26-a058-cf9b93f59e57 00:10:11.706 request: 00:10:11.706 { 00:10:11.706 "uuid": "cf9cac5b-c197-4a26-a058-cf9b93f59e57", 00:10:11.706 "method": "bdev_lvol_get_lvstores", 00:10:11.706 "req_id": 1 00:10:11.706 } 00:10:11.706 Got JSON-RPC error response 00:10:11.706 response: 00:10:11.706 { 00:10:11.706 "code": -19, 00:10:11.706 "message": "No such device" 00:10:11.706 } 00:10:11.706 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:11.706 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:11.706 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:11.706 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:11.706 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:11.965 aio_bdev 00:10:11.965 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fd416100-c601-4275-a7d3-6a3c6e21362b 00:10:11.965 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=fd416100-c601-4275-a7d3-6a3c6e21362b 00:10:11.965 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:11.965 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:10:11.965 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:11.965 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:11.965 13:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:12.224 13:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd416100-c601-4275-a7d3-6a3c6e21362b -t 2000 00:10:12.482 [ 00:10:12.482 { 00:10:12.482 "name": "fd416100-c601-4275-a7d3-6a3c6e21362b", 00:10:12.482 "aliases": [ 00:10:12.482 "lvs/lvol" 00:10:12.482 ], 00:10:12.482 "product_name": "Logical Volume", 00:10:12.482 "block_size": 4096, 00:10:12.482 "num_blocks": 38912, 00:10:12.482 "uuid": "fd416100-c601-4275-a7d3-6a3c6e21362b", 00:10:12.482 "assigned_rate_limits": { 00:10:12.482 "rw_ios_per_sec": 0, 00:10:12.482 "rw_mbytes_per_sec": 0, 00:10:12.483 "r_mbytes_per_sec": 0, 00:10:12.483 "w_mbytes_per_sec": 0 00:10:12.483 }, 00:10:12.483 "claimed": false, 00:10:12.483 "zoned": false, 00:10:12.483 "supported_io_types": { 00:10:12.483 "read": true, 00:10:12.483 "write": true, 00:10:12.483 "unmap": true, 00:10:12.483 "flush": false, 00:10:12.483 "reset": true, 00:10:12.483 "nvme_admin": false, 00:10:12.483 "nvme_io": false, 00:10:12.483 "nvme_io_md": false, 00:10:12.483 "write_zeroes": true, 00:10:12.483 "zcopy": false, 00:10:12.483 "get_zone_info": false, 00:10:12.483 "zone_management": false, 00:10:12.483 "zone_append": false, 00:10:12.483 "compare": false, 00:10:12.483 "compare_and_write": false, 00:10:12.483 "abort": false, 00:10:12.483 "seek_hole": true, 00:10:12.483 "seek_data": true, 00:10:12.483 "copy": false, 00:10:12.483 "nvme_iov_md": false 00:10:12.483 }, 00:10:12.483 "driver_specific": { 00:10:12.483 "lvol": { 00:10:12.483 "lvol_store_uuid": "cf9cac5b-c197-4a26-a058-cf9b93f59e57", 00:10:12.483 "base_bdev": "aio_bdev", 00:10:12.483 "thin_provision": false, 00:10:12.483 "num_allocated_clusters": 38, 00:10:12.483 "snapshot": false, 00:10:12.483 "clone": false, 00:10:12.483 "esnap_clone": false 00:10:12.483 } 00:10:12.483 } 00:10:12.483 } 00:10:12.483 ] 00:10:12.483 13:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:10:12.483 13:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:12.483 13:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf9cac5b-c197-4a26-a058-cf9b93f59e57 00:10:12.741 13:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:12.741 13:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf9cac5b-c197-4a26-a058-cf9b93f59e57 00:10:12.741 13:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:13.001 13:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:13.001 13:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fd416100-c601-4275-a7d3-6a3c6e21362b 00:10:13.259 13:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cf9cac5b-c197-4a26-a058-cf9b93f59e57 00:10:13.518 13:44:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:14.083 13:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:14.341 ************************************ 00:10:14.341 END TEST lvs_grow_clean 00:10:14.341 ************************************ 00:10:14.341 00:10:14.341 real 0m19.373s 00:10:14.341 user 0m18.479s 00:10:14.341 sys 0m2.664s 00:10:14.341 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.341 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:14.341 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:14.341 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:14.341 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.341 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:14.341 ************************************ 00:10:14.341 START TEST lvs_grow_dirty 00:10:14.341 ************************************ 00:10:14.341 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:14.341 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:14.341 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:14.341 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:14.341 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:14.342 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:14.342 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:14.342 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:14.342 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:14.342 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:14.908 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:14.908 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:15.165 13:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4a247966-cecf-4586-a940-a3188bf43b8f 00:10:15.165 
13:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a247966-cecf-4586-a940-a3188bf43b8f 00:10:15.166 13:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:15.423 13:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:15.423 13:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:15.423 13:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4a247966-cecf-4586-a940-a3188bf43b8f lvol 150 00:10:15.707 13:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=08ba076d-9576-4548-8914-8ed854904a09 00:10:15.707 13:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:15.707 13:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:15.965 [2024-10-01 13:44:25.967067] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:15.965 [2024-10-01 13:44:25.968137] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:15.965 true 00:10:15.965 13:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:15.965 13:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a247966-cecf-4586-a940-a3188bf43b8f 00:10:16.247 13:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:16.247 13:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:16.506 13:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 08ba076d-9576-4548-8914-8ed854904a09 00:10:16.764 13:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:17.023 [2024-10-01 13:44:27.163901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:17.023 13:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:17.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
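The export of the new lvol over NVMe/TCP just performed is the same pattern as in the clean run above; a condensed sketch with the NQN, listen address and lvol UUID reported by this run is shown here, together with the bdev_nvme_attach_controller call that bdevperf issues against its own RPC socket a few lines below.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 08ba076d-9576-4548-8914-8ed854904a09
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
     -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0   # issued by bdevperf below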
00:10:17.590 13:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=64003 00:10:17.590 13:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:17.590 13:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:17.590 13:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 64003 /var/tmp/bdevperf.sock 00:10:17.590 13:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 64003 ']' 00:10:17.590 13:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:17.590 13:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:17.590 13:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:17.590 13:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:17.590 13:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:17.590 [2024-10-01 13:44:27.527346] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:17.590 [2024-10-01 13:44:27.527447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64003 ] 00:10:17.590 [2024-10-01 13:44:27.676395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.848 [2024-10-01 13:44:27.819743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.848 [2024-10-01 13:44:27.887989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:18.412 13:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:18.412 13:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:18.412 13:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:18.670 Nvme0n1 00:10:18.929 13:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:19.187 [ 00:10:19.187 { 00:10:19.187 "name": "Nvme0n1", 00:10:19.187 "aliases": [ 00:10:19.187 "08ba076d-9576-4548-8914-8ed854904a09" 00:10:19.187 ], 00:10:19.187 "product_name": "NVMe disk", 00:10:19.187 "block_size": 4096, 00:10:19.187 "num_blocks": 38912, 00:10:19.187 "uuid": "08ba076d-9576-4548-8914-8ed854904a09", 00:10:19.187 "numa_id": -1, 00:10:19.187 "assigned_rate_limits": { 00:10:19.187 
"rw_ios_per_sec": 0, 00:10:19.187 "rw_mbytes_per_sec": 0, 00:10:19.187 "r_mbytes_per_sec": 0, 00:10:19.187 "w_mbytes_per_sec": 0 00:10:19.187 }, 00:10:19.187 "claimed": false, 00:10:19.187 "zoned": false, 00:10:19.187 "supported_io_types": { 00:10:19.187 "read": true, 00:10:19.187 "write": true, 00:10:19.187 "unmap": true, 00:10:19.187 "flush": true, 00:10:19.187 "reset": true, 00:10:19.187 "nvme_admin": true, 00:10:19.187 "nvme_io": true, 00:10:19.187 "nvme_io_md": false, 00:10:19.187 "write_zeroes": true, 00:10:19.187 "zcopy": false, 00:10:19.187 "get_zone_info": false, 00:10:19.187 "zone_management": false, 00:10:19.187 "zone_append": false, 00:10:19.187 "compare": true, 00:10:19.187 "compare_and_write": true, 00:10:19.187 "abort": true, 00:10:19.187 "seek_hole": false, 00:10:19.187 "seek_data": false, 00:10:19.187 "copy": true, 00:10:19.187 "nvme_iov_md": false 00:10:19.187 }, 00:10:19.187 "memory_domains": [ 00:10:19.187 { 00:10:19.187 "dma_device_id": "system", 00:10:19.187 "dma_device_type": 1 00:10:19.187 } 00:10:19.187 ], 00:10:19.187 "driver_specific": { 00:10:19.187 "nvme": [ 00:10:19.187 { 00:10:19.187 "trid": { 00:10:19.187 "trtype": "TCP", 00:10:19.187 "adrfam": "IPv4", 00:10:19.187 "traddr": "10.0.0.3", 00:10:19.187 "trsvcid": "4420", 00:10:19.187 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:19.187 }, 00:10:19.187 "ctrlr_data": { 00:10:19.187 "cntlid": 1, 00:10:19.187 "vendor_id": "0x8086", 00:10:19.187 "model_number": "SPDK bdev Controller", 00:10:19.187 "serial_number": "SPDK0", 00:10:19.187 "firmware_revision": "25.01", 00:10:19.187 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:19.187 "oacs": { 00:10:19.187 "security": 0, 00:10:19.187 "format": 0, 00:10:19.187 "firmware": 0, 00:10:19.187 "ns_manage": 0 00:10:19.187 }, 00:10:19.187 "multi_ctrlr": true, 00:10:19.187 "ana_reporting": false 00:10:19.187 }, 00:10:19.187 "vs": { 00:10:19.187 "nvme_version": "1.3" 00:10:19.187 }, 00:10:19.187 "ns_data": { 00:10:19.187 "id": 1, 00:10:19.187 "can_share": true 00:10:19.187 } 00:10:19.187 } 00:10:19.187 ], 00:10:19.187 "mp_policy": "active_passive" 00:10:19.187 } 00:10:19.187 } 00:10:19.187 ] 00:10:19.187 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=64027 00:10:19.187 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:19.187 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:19.187 Running I/O for 10 seconds... 
00:10:20.120 Latency(us) 00:10:20.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:20.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.120 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:10:20.120 =================================================================================================================== 00:10:20.120 Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:10:20.120 00:10:21.053 13:44:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4a247966-cecf-4586-a940-a3188bf43b8f 00:10:21.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.311 Nvme0n1 : 2.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:10:21.311 =================================================================================================================== 00:10:21.311 Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:10:21.311 00:10:21.570 true 00:10:21.570 13:44:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a247966-cecf-4586-a940-a3188bf43b8f 00:10:21.570 13:44:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:21.829 13:44:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:21.829 13:44:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:21.829 13:44:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 64027 00:10:22.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.396 Nvme0n1 : 3.00 6223.00 24.31 0.00 0.00 0.00 0.00 0.00 00:10:22.396 =================================================================================================================== 00:10:22.396 Total : 6223.00 24.31 0.00 0.00 0.00 0.00 0.00 00:10:22.396 00:10:23.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.328 Nvme0n1 : 4.00 6223.00 24.31 0.00 0.00 0.00 0.00 0.00 00:10:23.328 =================================================================================================================== 00:10:23.328 Total : 6223.00 24.31 0.00 0.00 0.00 0.00 0.00 00:10:23.328 00:10:24.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:24.261 Nvme0n1 : 5.00 6016.00 23.50 0.00 0.00 0.00 0.00 0.00 00:10:24.261 =================================================================================================================== 00:10:24.261 Total : 6016.00 23.50 0.00 0.00 0.00 0.00 0.00 00:10:24.261 00:10:25.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.200 Nvme0n1 : 6.00 6071.67 23.72 0.00 0.00 0.00 0.00 0.00 00:10:25.200 =================================================================================================================== 00:10:25.200 Total : 6071.67 23.72 0.00 0.00 0.00 0.00 0.00 00:10:25.200 00:10:26.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:26.134 Nvme0n1 : 7.00 6093.29 23.80 0.00 0.00 0.00 0.00 0.00 00:10:26.134 =================================================================================================================== 00:10:26.134 
Total : 6093.29 23.80 0.00 0.00 0.00 0.00 0.00 00:10:26.134 00:10:27.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:27.506 Nvme0n1 : 8.00 6093.62 23.80 0.00 0.00 0.00 0.00 0.00 00:10:27.506 =================================================================================================================== 00:10:27.506 Total : 6093.62 23.80 0.00 0.00 0.00 0.00 0.00 00:10:27.506 00:10:28.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.440 Nvme0n1 : 9.00 6093.89 23.80 0.00 0.00 0.00 0.00 0.00 00:10:28.440 =================================================================================================================== 00:10:28.440 Total : 6093.89 23.80 0.00 0.00 0.00 0.00 0.00 00:10:28.440 00:10:29.374 00:10:29.374 Latency(us) 00:10:29.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:29.374 Nvme0n1 : 10.00 6105.51 23.85 0.00 0.00 20958.85 6940.86 212574.95 00:10:29.374 =================================================================================================================== 00:10:29.374 Total : 6105.51 23.85 0.00 0.00 20958.85 6940.86 212574.95 00:10:29.374 { 00:10:29.374 "results": [ 00:10:29.374 { 00:10:29.374 "job": "Nvme0n1", 00:10:29.374 "core_mask": "0x2", 00:10:29.374 "workload": "randwrite", 00:10:29.374 "status": "finished", 00:10:29.374 "queue_depth": 128, 00:10:29.374 "io_size": 4096, 00:10:29.374 "runtime": 10.002281, 00:10:29.374 "iops": 6105.5073337771655, 00:10:29.374 "mibps": 23.849638022567053, 00:10:29.374 "io_failed": 0, 00:10:29.374 "io_timeout": 0, 00:10:29.374 "avg_latency_us": 20958.85101460494, 00:10:29.374 "min_latency_us": 6940.858181818182, 00:10:29.374 "max_latency_us": 212574.95272727273 00:10:29.374 } 00:10:29.374 ], 00:10:29.374 "core_count": 1 00:10:29.374 } 00:10:29.374 13:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 64003 00:10:29.374 13:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 64003 ']' 00:10:29.374 13:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 64003 00:10:29.374 13:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:29.374 13:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.374 13:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64003 00:10:29.374 killing process with pid 64003 00:10:29.374 Received shutdown signal, test time was about 10.000000 seconds 00:10:29.374 00:10:29.374 Latency(us) 00:10:29.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.374 =================================================================================================================== 00:10:29.374 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:29.374 13:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:29.374 13:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:29.375 13:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 64003' 00:10:29.375 13:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 64003 00:10:29.375 13:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 64003 00:10:29.632 13:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:29.891 13:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:30.148 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a247966-cecf-4586-a940-a3188bf43b8f 00:10:30.148 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63628 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63628 00:10:30.407 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63628 Killed "${NVMF_APP[@]}" "$@" 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:30.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=64160 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 64160 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 64160 ']' 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
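From this point the test exercises the dirty-shutdown path: the target that owned the lvstore was killed with SIGKILL above, a fresh nvmf_tgt has just been started, and simply re-creating the AIO bdev replays the blobstore metadata. The checks performed below reduce to roughly the following sketch, using the paths and UUIDs from this run.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
$rpc bdev_aio_create "$aio_file" aio_bdev 4096          # triggers "Performing recovery on blobstore"
$rpc bdev_get_bdevs -b 08ba076d-9576-4548-8914-8ed854904a09 -t 2000   # lvol reappears as lvs/lvol
$rpc bdev_lvol_get_lvstores -u 4a247966-cecf-4586-a940-a3188bf43b8f | \
     jq -r '.[0].free_clusters, .[0].total_data_clusters'             # 61 free / 99 total expected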
00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.407 13:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:30.407 [2024-10-01 13:44:40.512071] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:30.407 [2024-10-01 13:44:40.512364] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.665 [2024-10-01 13:44:40.647422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.665 [2024-10-01 13:44:40.798726] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.665 [2024-10-01 13:44:40.799082] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.665 [2024-10-01 13:44:40.799232] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.665 [2024-10-01 13:44:40.799374] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.665 [2024-10-01 13:44:40.799409] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.665 [2024-10-01 13:44:40.799535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.923 [2024-10-01 13:44:40.875577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:31.491 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.491 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:31.491 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:31.491 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:31.491 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:31.491 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.491 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:31.750 [2024-10-01 13:44:41.786462] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:31.750 [2024-10-01 13:44:41.786720] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:31.750 [2024-10-01 13:44:41.787483] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:31.750 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:31.750 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 08ba076d-9576-4548-8914-8ed854904a09 00:10:31.750 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=08ba076d-9576-4548-8914-8ed854904a09 00:10:31.750 13:44:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.750 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:31.750 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.750 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.750 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:32.008 13:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08ba076d-9576-4548-8914-8ed854904a09 -t 2000 00:10:32.266 [ 00:10:32.266 { 00:10:32.266 "name": "08ba076d-9576-4548-8914-8ed854904a09", 00:10:32.266 "aliases": [ 00:10:32.266 "lvs/lvol" 00:10:32.266 ], 00:10:32.266 "product_name": "Logical Volume", 00:10:32.266 "block_size": 4096, 00:10:32.266 "num_blocks": 38912, 00:10:32.266 "uuid": "08ba076d-9576-4548-8914-8ed854904a09", 00:10:32.266 "assigned_rate_limits": { 00:10:32.266 "rw_ios_per_sec": 0, 00:10:32.266 "rw_mbytes_per_sec": 0, 00:10:32.266 "r_mbytes_per_sec": 0, 00:10:32.266 "w_mbytes_per_sec": 0 00:10:32.266 }, 00:10:32.266 "claimed": false, 00:10:32.266 "zoned": false, 00:10:32.266 "supported_io_types": { 00:10:32.266 "read": true, 00:10:32.266 "write": true, 00:10:32.266 "unmap": true, 00:10:32.266 "flush": false, 00:10:32.266 "reset": true, 00:10:32.266 "nvme_admin": false, 00:10:32.266 "nvme_io": false, 00:10:32.266 "nvme_io_md": false, 00:10:32.266 "write_zeroes": true, 00:10:32.266 "zcopy": false, 00:10:32.266 "get_zone_info": false, 00:10:32.266 "zone_management": false, 00:10:32.266 "zone_append": false, 00:10:32.266 "compare": false, 00:10:32.266 "compare_and_write": false, 00:10:32.266 "abort": false, 00:10:32.266 "seek_hole": true, 00:10:32.266 "seek_data": true, 00:10:32.266 "copy": false, 00:10:32.266 "nvme_iov_md": false 00:10:32.266 }, 00:10:32.266 "driver_specific": { 00:10:32.266 "lvol": { 00:10:32.266 "lvol_store_uuid": "4a247966-cecf-4586-a940-a3188bf43b8f", 00:10:32.266 "base_bdev": "aio_bdev", 00:10:32.266 "thin_provision": false, 00:10:32.266 "num_allocated_clusters": 38, 00:10:32.266 "snapshot": false, 00:10:32.266 "clone": false, 00:10:32.266 "esnap_clone": false 00:10:32.266 } 00:10:32.266 } 00:10:32.266 } 00:10:32.266 ] 00:10:32.267 13:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:32.267 13:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a247966-cecf-4586-a940-a3188bf43b8f 00:10:32.267 13:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:32.527 13:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:32.527 13:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:32.527 13:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4a247966-cecf-4586-a940-a3188bf43b8f 00:10:33.093 13:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:33.093 13:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:33.093 [2024-10-01 13:44:43.263626] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:33.352 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a247966-cecf-4586-a940-a3188bf43b8f 00:10:33.352 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:33.352 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a247966-cecf-4586-a940-a3188bf43b8f 00:10:33.352 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:33.352 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.352 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:33.352 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.352 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:33.352 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.352 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:33.352 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:33.352 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a247966-cecf-4586-a940-a3188bf43b8f 00:10:33.609 request: 00:10:33.609 { 00:10:33.609 "uuid": "4a247966-cecf-4586-a940-a3188bf43b8f", 00:10:33.609 "method": "bdev_lvol_get_lvstores", 00:10:33.609 "req_id": 1 00:10:33.609 } 00:10:33.609 Got JSON-RPC error response 00:10:33.609 response: 00:10:33.609 { 00:10:33.609 "code": -19, 00:10:33.609 "message": "No such device" 00:10:33.609 } 00:10:33.609 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:33.609 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:33.609 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:33.609 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:33.609 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:33.867 aio_bdev 00:10:33.867 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 08ba076d-9576-4548-8914-8ed854904a09 00:10:33.867 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=08ba076d-9576-4548-8914-8ed854904a09 00:10:33.867 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.867 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:33.867 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.867 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.867 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:34.125 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08ba076d-9576-4548-8914-8ed854904a09 -t 2000 00:10:34.384 [ 00:10:34.384 { 00:10:34.384 "name": "08ba076d-9576-4548-8914-8ed854904a09", 00:10:34.384 "aliases": [ 00:10:34.384 "lvs/lvol" 00:10:34.384 ], 00:10:34.384 "product_name": "Logical Volume", 00:10:34.384 "block_size": 4096, 00:10:34.384 "num_blocks": 38912, 00:10:34.384 "uuid": "08ba076d-9576-4548-8914-8ed854904a09", 00:10:34.384 "assigned_rate_limits": { 00:10:34.384 "rw_ios_per_sec": 0, 00:10:34.384 "rw_mbytes_per_sec": 0, 00:10:34.384 "r_mbytes_per_sec": 0, 00:10:34.384 "w_mbytes_per_sec": 0 00:10:34.384 }, 00:10:34.384 "claimed": false, 00:10:34.384 "zoned": false, 00:10:34.384 "supported_io_types": { 00:10:34.384 "read": true, 00:10:34.384 "write": true, 00:10:34.384 "unmap": true, 00:10:34.384 "flush": false, 00:10:34.384 "reset": true, 00:10:34.384 "nvme_admin": false, 00:10:34.384 "nvme_io": false, 00:10:34.384 "nvme_io_md": false, 00:10:34.384 "write_zeroes": true, 00:10:34.384 "zcopy": false, 00:10:34.384 "get_zone_info": false, 00:10:34.384 "zone_management": false, 00:10:34.384 "zone_append": false, 00:10:34.384 "compare": false, 00:10:34.384 "compare_and_write": false, 00:10:34.384 "abort": false, 00:10:34.384 "seek_hole": true, 00:10:34.384 "seek_data": true, 00:10:34.384 "copy": false, 00:10:34.384 "nvme_iov_md": false 00:10:34.384 }, 00:10:34.384 "driver_specific": { 00:10:34.384 "lvol": { 00:10:34.384 "lvol_store_uuid": "4a247966-cecf-4586-a940-a3188bf43b8f", 00:10:34.384 "base_bdev": "aio_bdev", 00:10:34.384 "thin_provision": false, 00:10:34.384 "num_allocated_clusters": 38, 00:10:34.384 "snapshot": false, 00:10:34.384 "clone": false, 00:10:34.384 "esnap_clone": false 00:10:34.384 } 00:10:34.384 } 00:10:34.384 } 00:10:34.384 ] 00:10:34.384 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:34.384 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a247966-cecf-4586-a940-a3188bf43b8f 00:10:34.384 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r 
'.[0].free_clusters' 00:10:34.642 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:34.642 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:34.642 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a247966-cecf-4586-a940-a3188bf43b8f 00:10:34.900 13:44:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:34.900 13:44:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 08ba076d-9576-4548-8914-8ed854904a09 00:10:35.157 13:44:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4a247966-cecf-4586-a940-a3188bf43b8f 00:10:35.723 13:44:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:35.723 13:44:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:36.288 ************************************ 00:10:36.288 END TEST lvs_grow_dirty 00:10:36.288 ************************************ 00:10:36.288 00:10:36.288 real 0m21.784s 00:10:36.288 user 0m46.227s 00:10:36.288 sys 0m7.889s 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:36.288 nvmf_trace.0 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:36.288 13:44:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.288 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.288 rmmod nvme_tcp 00:10:36.288 rmmod nvme_fabrics 00:10:36.587 rmmod nvme_keyring 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 64160 ']' 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 64160 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 64160 ']' 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 64160 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64160 00:10:36.587 killing process with pid 64160 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64160' 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 64160 00:10:36.587 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 64160 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:10:36.845 ************************************ 00:10:36.845 END TEST nvmf_lvs_grow 00:10:36.845 ************************************ 00:10:36.845 00:10:36.845 real 0m44.032s 00:10:36.845 user 1m11.714s 00:10:36.845 sys 0m11.447s 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.845 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.104 ************************************ 00:10:37.104 START TEST nvmf_bdev_io_wait 00:10:37.104 ************************************ 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:37.104 * Looking for test storage... 
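To summarize the lvs_grow_dirty case traced above before the next test's setup continues: it exercises lvstore recovery after the base bdev is lost and re-attached. Condensed to the underlying RPCs (angle brackets are placeholders, not literal values; the standalone one-liner form is illustrative rather than the test script itself, but each RPC appears in the trace):

  rpc.py bdev_aio_delete aio_bdev                      # hot-removes the lvstore together with its base bdev
  rpc.py bdev_lvol_get_lvstores -u <lvs-uuid>          # expected to fail with "No such device" (-19), as seen above
  rpc.py bdev_aio_create <backing-file> aio_bdev 4096  # re-attach the same backing file
  rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000         # wait for examine; the lvol bdev reappears
  rpc.py bdev_lvol_get_lvstores -u <lvs-uuid>          # free/total cluster counts reflect the earlier grow (61/99)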
00:10:37.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:37.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.104 --rc genhtml_branch_coverage=1 00:10:37.104 --rc genhtml_function_coverage=1 00:10:37.104 --rc genhtml_legend=1 00:10:37.104 --rc geninfo_all_blocks=1 00:10:37.104 --rc geninfo_unexecuted_blocks=1 00:10:37.104 00:10:37.104 ' 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:37.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.104 --rc genhtml_branch_coverage=1 00:10:37.104 --rc genhtml_function_coverage=1 00:10:37.104 --rc genhtml_legend=1 00:10:37.104 --rc geninfo_all_blocks=1 00:10:37.104 --rc geninfo_unexecuted_blocks=1 00:10:37.104 00:10:37.104 ' 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:37.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.104 --rc genhtml_branch_coverage=1 00:10:37.104 --rc genhtml_function_coverage=1 00:10:37.104 --rc genhtml_legend=1 00:10:37.104 --rc geninfo_all_blocks=1 00:10:37.104 --rc geninfo_unexecuted_blocks=1 00:10:37.104 00:10:37.104 ' 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:37.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.104 --rc genhtml_branch_coverage=1 00:10:37.104 --rc genhtml_function_coverage=1 00:10:37.104 --rc genhtml_legend=1 00:10:37.104 --rc geninfo_all_blocks=1 00:10:37.104 --rc geninfo_unexecuted_blocks=1 00:10:37.104 00:10:37.104 ' 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.104 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.105 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
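The defaults sourced from nvmf/common.sh above (port 4420, a host NQN generated on the fly by nvme gen-hostnqn, NET_TYPE=virt, the 64 MiB / 512-byte malloc sizes) describe the target that a host-side initiator would connect to. This run drives I/O through bdevperf rather than the kernel initiator, so purely as an illustration of how those values fit together, a connect from the initiator side would look like:

  nvme connect -t tcp -a 10.0.0.3 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn "$(nvme gen-hostnqn)"

The 10.0.0.3 address and the nqn.2016-06.io.spdk:cnode1 subsystem are the listener and subsystem created later in this trace.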
00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:37.105 
13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:37.105 Cannot find device "nvmf_init_br" 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:37.105 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:37.363 Cannot find device "nvmf_init_br2" 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:37.363 Cannot find device "nvmf_tgt_br" 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:37.363 Cannot find device "nvmf_tgt_br2" 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:37.363 Cannot find device "nvmf_init_br" 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:37.363 Cannot find device "nvmf_init_br2" 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:37.363 Cannot find device "nvmf_tgt_br" 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:37.363 Cannot find device "nvmf_tgt_br2" 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:37.363 Cannot find device "nvmf_br" 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:37.363 Cannot find device "nvmf_init_if" 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:37.363 Cannot find device "nvmf_init_if2" 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:37.363 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:10:37.363 
13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:37.363 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:37.363 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:37.622 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:37.622 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:10:37.622 00:10:37.622 --- 10.0.0.3 ping statistics --- 00:10:37.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.622 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:37.622 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:37.622 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:10:37.622 00:10:37.622 --- 10.0.0.4 ping statistics --- 00:10:37.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.622 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:37.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:10:37.622 00:10:37.622 --- 10.0.0.1 ping statistics --- 00:10:37.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.622 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:37.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:37.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:10:37.622 00:10:37.622 --- 10.0.0.2 ping statistics --- 00:10:37.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.622 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=64534 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 64534 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 64534 ']' 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.622 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:37.622 [2024-10-01 13:44:47.746604] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:10:37.622 [2024-10-01 13:44:47.747164] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.880 [2024-10-01 13:44:47.884552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.880 [2024-10-01 13:44:48.018676] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.880 [2024-10-01 13:44:48.018747] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.880 [2024-10-01 13:44:48.018763] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.880 [2024-10-01 13:44:48.018785] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.880 [2024-10-01 13:44:48.018794] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.880 [2024-10-01 13:44:48.018937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.880 [2024-10-01 13:44:48.019257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.880 [2024-10-01 13:44:48.019965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.880 [2024-10-01 13:44:48.019987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.811 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.811 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:38.811 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:38.811 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:38.811 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:38.811 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.811 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:38.811 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.811 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:38.811 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.811 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:38.812 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.812 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:38.812 [2024-10-01 13:44:48.923335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:38.812 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.812 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:38.812 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.812 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:38.812 [2024-10-01 13:44:48.942337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.812 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.812 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:38.812 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.812 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.069 Malloc0 00:10:39.069 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.069 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:39.069 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.069 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.069 [2024-10-01 13:44:49.017750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64575 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64577 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:39.069 13:44:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64579 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:39.069 { 00:10:39.069 "params": { 00:10:39.069 "name": "Nvme$subsystem", 00:10:39.069 "trtype": "$TEST_TRANSPORT", 00:10:39.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.069 "adrfam": "ipv4", 00:10:39.069 "trsvcid": "$NVMF_PORT", 00:10:39.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.069 "hdgst": ${hdgst:-false}, 00:10:39.069 "ddgst": ${ddgst:-false} 00:10:39.069 }, 00:10:39.069 "method": "bdev_nvme_attach_controller" 00:10:39.069 } 00:10:39.069 EOF 00:10:39.069 )") 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64581 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:39.069 { 00:10:39.069 "params": { 00:10:39.069 "name": "Nvme$subsystem", 00:10:39.069 "trtype": "$TEST_TRANSPORT", 00:10:39.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.069 "adrfam": "ipv4", 00:10:39.069 "trsvcid": "$NVMF_PORT", 00:10:39.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.069 "hdgst": ${hdgst:-false}, 00:10:39.069 "ddgst": ${ddgst:-false} 00:10:39.069 }, 00:10:39.069 "method": "bdev_nvme_attach_controller" 00:10:39.069 } 00:10:39.069 EOF 00:10:39.069 )") 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:39.069 { 00:10:39.069 "params": { 00:10:39.069 "name": "Nvme$subsystem", 00:10:39.069 "trtype": 
"$TEST_TRANSPORT", 00:10:39.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.069 "adrfam": "ipv4", 00:10:39.069 "trsvcid": "$NVMF_PORT", 00:10:39.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.069 "hdgst": ${hdgst:-false}, 00:10:39.069 "ddgst": ${ddgst:-false} 00:10:39.069 }, 00:10:39.069 "method": "bdev_nvme_attach_controller" 00:10:39.069 } 00:10:39.069 EOF 00:10:39.069 )") 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:39.069 { 00:10:39.069 "params": { 00:10:39.069 "name": "Nvme$subsystem", 00:10:39.069 "trtype": "$TEST_TRANSPORT", 00:10:39.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.069 "adrfam": "ipv4", 00:10:39.069 "trsvcid": "$NVMF_PORT", 00:10:39.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.069 "hdgst": ${hdgst:-false}, 00:10:39.069 "ddgst": ${ddgst:-false} 00:10:39.069 }, 00:10:39.069 "method": "bdev_nvme_attach_controller" 00:10:39.069 } 00:10:39.069 EOF 00:10:39.069 )") 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:39.069 "params": { 00:10:39.069 "name": "Nvme1", 00:10:39.069 "trtype": "tcp", 00:10:39.069 "traddr": "10.0.0.3", 00:10:39.069 "adrfam": "ipv4", 00:10:39.069 "trsvcid": "4420", 00:10:39.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.069 "hdgst": false, 00:10:39.069 "ddgst": false 00:10:39.069 }, 00:10:39.069 "method": "bdev_nvme_attach_controller" 00:10:39.069 }' 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:39.069 "params": { 00:10:39.069 "name": "Nvme1", 00:10:39.069 "trtype": "tcp", 00:10:39.069 "traddr": "10.0.0.3", 00:10:39.069 "adrfam": "ipv4", 00:10:39.069 "trsvcid": "4420", 00:10:39.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.069 "hdgst": false, 00:10:39.069 "ddgst": false 00:10:39.069 }, 00:10:39.069 "method": "bdev_nvme_attach_controller" 00:10:39.069 }' 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:39.069 "params": { 00:10:39.069 "name": "Nvme1", 00:10:39.069 "trtype": "tcp", 00:10:39.069 "traddr": "10.0.0.3", 00:10:39.069 "adrfam": "ipv4", 00:10:39.069 "trsvcid": "4420", 00:10:39.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.069 "hdgst": false, 00:10:39.069 "ddgst": false 00:10:39.069 }, 00:10:39.069 "method": "bdev_nvme_attach_controller" 00:10:39.069 }' 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:39.069 "params": { 00:10:39.069 "name": "Nvme1", 00:10:39.069 "trtype": "tcp", 00:10:39.069 "traddr": "10.0.0.3", 00:10:39.069 "adrfam": "ipv4", 00:10:39.069 "trsvcid": "4420", 00:10:39.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.069 "hdgst": false, 00:10:39.069 "ddgst": false 00:10:39.069 }, 00:10:39.069 "method": "bdev_nvme_attach_controller" 00:10:39.069 }' 00:10:39.069 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64575 00:10:39.069 [2024-10-01 13:44:49.102327] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:39.069 [2024-10-01 13:44:49.102421] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:39.069 [2024-10-01 13:44:49.102451] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:39.069 [2024-10-01 13:44:49.102504] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:39.069 [2024-10-01 13:44:49.104040] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:39.069 [2024-10-01 13:44:49.104305] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:39.069 [2024-10-01 13:44:49.110397] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:10:39.069 [2024-10-01 13:44:49.110634] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:39.327 [2024-10-01 13:44:49.309652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.327 [2024-10-01 13:44:49.374670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.327 [2024-10-01 13:44:49.417574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:10:39.327 [2024-10-01 13:44:49.456341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.327 [2024-10-01 13:44:49.466401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.584 [2024-10-01 13:44:49.534298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:39.584 [2024-10-01 13:44:49.534870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.584 [2024-10-01 13:44:49.555236] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:39.584 Running I/O for 1 seconds... 00:10:39.584 [2024-10-01 13:44:49.602749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.584 [2024-10-01 13:44:49.637795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.584 [2024-10-01 13:44:49.645892] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:39.584 [2024-10-01 13:44:49.754241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.841 Running I/O for 1 seconds... 00:10:39.841 Running I/O for 1 seconds... 00:10:39.841 Running I/O for 1 seconds... 
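The four bdevperf processes above (core masks 0x10, 0x20, 0x40 and 0x80) each read the JSON emitted by gen_nvmf_target_json, whose only entry is the bdev_nvme_attach_controller call printed in the trace. As a rough hand-run equivalent against an already started SPDK application, the same controller could be attached over the RPC socket as sketched below; this invocation is illustrative and not part of the test scripts, though the flags mirror the bdev_nvme_attach_controller call the queue_depth test issues later in this run.

scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1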
00:10:40.775 7395.00 IOPS, 28.89 MiB/s 00:10:40.775 Latency(us) 00:10:40.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.775 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:40.775 Nvme1n1 : 1.01 7443.37 29.08 0.00 0.00 17100.21 9592.09 21686.46 00:10:40.775 =================================================================================================================== 00:10:40.775 Total : 7443.37 29.08 0.00 0.00 17100.21 9592.09 21686.46 00:10:40.775 174608.00 IOPS, 682.06 MiB/s 00:10:40.775 Latency(us) 00:10:40.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.775 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:40.775 Nvme1n1 : 1.00 174199.32 680.47 0.00 0.00 730.71 415.19 2338.44 00:10:40.775 =================================================================================================================== 00:10:40.775 Total : 174199.32 680.47 0.00 0.00 730.71 415.19 2338.44 00:10:40.775 6565.00 IOPS, 25.64 MiB/s 00:10:40.775 Latency(us) 00:10:40.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.775 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:40.775 Nvme1n1 : 1.01 6618.96 25.86 0.00 0.00 19220.88 10187.87 29789.09 00:10:40.775 =================================================================================================================== 00:10:40.775 Total : 6618.96 25.86 0.00 0.00 19220.88 10187.87 29789.09 00:10:40.775 6619.00 IOPS, 25.86 MiB/s 00:10:40.775 Latency(us) 00:10:40.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.775 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:40.775 Nvme1n1 : 1.01 6694.17 26.15 0.00 0.00 19019.78 5451.40 33125.47 00:10:40.775 =================================================================================================================== 00:10:40.775 Total : 6694.17 26.15 0.00 0.00 19019.78 5451.40 33125.47 00:10:41.032 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64577 00:10:41.032 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64579 00:10:41.032 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64581 00:10:41.033 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.033 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.033 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.033 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.033 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:41.033 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:41.033 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:41.033 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 
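The four latency tables above line up with the four bdevperf jobs started earlier: core mask 0x10 ran the write workload, 0x20 read, 0x40 flush and 0x80 unmap, all at queue depth 128 with 4 KiB I/Os for about one second. A minimal sketch of one such launch is shown below; the queue depth, I/O size, workload and runtime are taken from the table, while the rest of the command line is assumed from typical bdevperf usage rather than quoted from bdev_io_wait.sh, and $SPDK_DIR stands in for the repository root.

"$SPDK_DIR/build/examples/bdevperf" -m 0x10 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 &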
00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.291 rmmod nvme_tcp 00:10:41.291 rmmod nvme_fabrics 00:10:41.291 rmmod nvme_keyring 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 64534 ']' 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 64534 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 64534 ']' 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 64534 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64534 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64534' 00:10:41.291 killing process with pid 64534 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 64534 00:10:41.291 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 64534 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 
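The killprocess helper traced above boils down to checking that the PID is still alive, recording the process name, then killing and reaping it. A condensed sketch follows, with the sudo/root special case and any retry handling simplified away.

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0              # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 / reactor_1 above
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                                 # reap it if it is a child of this shell
}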
00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:41.549 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:10:41.807 00:10:41.807 real 0m4.753s 00:10:41.807 user 0m19.279s 00:10:41.807 sys 0m2.509s 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.807 ************************************ 00:10:41.807 END TEST nvmf_bdev_io_wait 00:10:41.807 ************************************ 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:41.807 ************************************ 00:10:41.807 START TEST nvmf_queue_depth 00:10:41.807 ************************************ 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:41.807 * Looking for test storage... 
00:10:41.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:10:41.807 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.091 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:42.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.092 --rc genhtml_branch_coverage=1 00:10:42.092 --rc genhtml_function_coverage=1 00:10:42.092 --rc genhtml_legend=1 00:10:42.092 --rc geninfo_all_blocks=1 00:10:42.092 --rc geninfo_unexecuted_blocks=1 00:10:42.092 00:10:42.092 ' 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:42.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.092 --rc genhtml_branch_coverage=1 00:10:42.092 --rc genhtml_function_coverage=1 00:10:42.092 --rc genhtml_legend=1 00:10:42.092 --rc geninfo_all_blocks=1 00:10:42.092 --rc geninfo_unexecuted_blocks=1 00:10:42.092 00:10:42.092 ' 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:42.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.092 --rc genhtml_branch_coverage=1 00:10:42.092 --rc genhtml_function_coverage=1 00:10:42.092 --rc genhtml_legend=1 00:10:42.092 --rc geninfo_all_blocks=1 00:10:42.092 --rc geninfo_unexecuted_blocks=1 00:10:42.092 00:10:42.092 ' 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:42.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.092 --rc genhtml_branch_coverage=1 00:10:42.092 --rc genhtml_function_coverage=1 00:10:42.092 --rc genhtml_legend=1 00:10:42.092 --rc geninfo_all_blocks=1 00:10:42.092 --rc geninfo_unexecuted_blocks=1 00:10:42.092 00:10:42.092 ' 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:42.092 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:42.092 
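The "line 33: [: : integer expression expected" message above comes from build_nvmf_app_args comparing an unset variable numerically ('[' '' -eq 1 ']'). The run continues regardless, but the usual way to make such a check safe is to give the variable a default before comparing; in the sketch below both the variable name and the action taken are placeholders, not the ones actually used in nvmf/common.sh.

if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    NVMF_APP+=(--some-extra-arg)    # placeholder action; the real branch differs
fi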
13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.092 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:42.093 13:44:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:42.093 Cannot find device "nvmf_init_br" 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:42.093 Cannot find device "nvmf_init_br2" 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:42.093 Cannot find device "nvmf_tgt_br" 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:42.093 Cannot find device "nvmf_tgt_br2" 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:42.093 Cannot find device "nvmf_init_br" 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:42.093 Cannot find device "nvmf_init_br2" 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:42.093 Cannot find device "nvmf_tgt_br" 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:42.093 Cannot find device "nvmf_tgt_br2" 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:42.093 Cannot find device "nvmf_br" 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:42.093 Cannot find device "nvmf_init_if" 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:42.093 Cannot find device "nvmf_init_if2" 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:42.093 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:42.093 13:44:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:42.093 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:42.093 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:42.352 
13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:42.352 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:42.352 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.136 ms 00:10:42.352 00:10:42.352 --- 10.0.0.3 ping statistics --- 00:10:42.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.352 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:42.352 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:42.352 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:10:42.352 00:10:42.352 --- 10.0.0.4 ping statistics --- 00:10:42.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.352 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:42.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:42.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:10:42.352 00:10:42.352 --- 10.0.0.1 ping statistics --- 00:10:42.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.352 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:42.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:42.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:10:42.352 00:10:42.352 --- 10.0.0.2 ping statistics --- 00:10:42.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.352 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=64868 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 64868 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64868 ']' 00:10:42.352 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.353 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.353 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.353 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.353 13:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:42.612 [2024-10-01 13:44:52.589420] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:10:42.612 [2024-10-01 13:44:52.589594] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.612 [2024-10-01 13:44:52.744680] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.870 [2024-10-01 13:44:52.873870] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.870 [2024-10-01 13:44:52.873948] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.870 [2024-10-01 13:44:52.873963] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.870 [2024-10-01 13:44:52.873974] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.870 [2024-10-01 13:44:52.873984] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.870 [2024-10-01 13:44:52.874019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.870 [2024-10-01 13:44:52.930834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.805 [2024-10-01 13:44:53.675493] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.805 Malloc0 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.805 [2024-10-01 13:44:53.746771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64900 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64900 /var/tmp/bdevperf.sock 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64900 ']' 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.805 13:44:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.805 [2024-10-01 13:44:53.830280] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:10:43.805 [2024-10-01 13:44:53.830448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64900 ] 00:10:43.805 [2024-10-01 13:44:53.975584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.064 [2024-10-01 13:44:54.170867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.322 [2024-10-01 13:44:54.248362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:44.889 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.889 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:44.889 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:44.889 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.889 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:44.889 NVMe0n1 00:10:44.889 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.889 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:44.889 Running I/O for 10 seconds... 00:10:55.170 6164.00 IOPS, 24.08 MiB/s 6673.50 IOPS, 26.07 MiB/s 7067.00 IOPS, 27.61 MiB/s 7186.50 IOPS, 28.07 MiB/s 7282.80 IOPS, 28.45 MiB/s 7348.00 IOPS, 28.70 MiB/s 7440.00 IOPS, 29.06 MiB/s 7536.00 IOPS, 29.44 MiB/s 7634.89 IOPS, 29.82 MiB/s 7741.60 IOPS, 30.24 MiB/s 00:10:55.170 Latency(us) 00:10:55.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.170 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:55.170 Verification LBA range: start 0x0 length 0x4000 00:10:55.170 NVMe0n1 : 10.07 7775.06 30.37 0.00 0.00 131051.54 13583.83 98661.47 00:10:55.170 =================================================================================================================== 00:10:55.170 Total : 7775.06 30.37 0.00 0.00 131051.54 13583.83 98661.47 00:10:55.170 { 00:10:55.170 "results": [ 00:10:55.170 { 00:10:55.170 "job": "NVMe0n1", 00:10:55.170 "core_mask": "0x1", 00:10:55.170 "workload": "verify", 00:10:55.170 "status": "finished", 00:10:55.170 "verify_range": { 00:10:55.170 "start": 0, 00:10:55.170 "length": 16384 00:10:55.170 }, 00:10:55.170 "queue_depth": 1024, 00:10:55.170 "io_size": 4096, 00:10:55.170 "runtime": 10.068738, 00:10:55.170 "iops": 7775.055821295578, 00:10:55.170 "mibps": 30.371311801935853, 00:10:55.170 "io_failed": 0, 00:10:55.170 "io_timeout": 0, 00:10:55.170 "avg_latency_us": 131051.53709966497, 00:10:55.170 "min_latency_us": 13583.825454545455, 00:10:55.170 "max_latency_us": 98661.46909090909 00:10:55.170 } 00:10:55.170 ], 00:10:55.170 "core_count": 1 00:10:55.170 } 00:10:55.170 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64900 00:10:55.170 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64900 ']' 00:10:55.170 13:45:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64900 00:10:55.170 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:55.170 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:55.170 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64900 00:10:55.170 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:55.170 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:55.170 killing process with pid 64900 00:10:55.170 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64900' 00:10:55.170 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64900 00:10:55.170 Received shutdown signal, test time was about 10.000000 seconds 00:10:55.170 00:10:55.171 Latency(us) 00:10:55.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.171 =================================================================================================================== 00:10:55.171 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:55.171 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64900 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:55.429 rmmod nvme_tcp 00:10:55.429 rmmod nvme_fabrics 00:10:55.429 rmmod nvme_keyring 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 64868 ']' 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 64868 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64868 ']' 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64868 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64868 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:55.429 killing process with pid 64868 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64868' 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64868 00:10:55.429 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64868 00:10:55.687 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:55.687 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:55.687 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:55.687 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:55.687 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:55.687 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:10:55.687 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:10:55.687 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.687 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:55.687 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:55.687 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:55.687 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:55.945 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:55.945 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:55.945 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:55.945 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:55.945 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:55.945 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:55.945 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:55.945 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:55.945 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:55.945 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:55.945 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:55.945 13:45:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.945 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.945 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.945 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:10:55.945 00:10:55.945 real 0m14.193s 00:10:55.945 user 0m23.892s 00:10:55.945 sys 0m2.589s 00:10:55.945 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.945 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.945 ************************************ 00:10:55.945 END TEST nvmf_queue_depth 00:10:55.945 ************************************ 00:10:55.945 13:45:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:55.945 13:45:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:55.945 13:45:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.945 13:45:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:55.945 ************************************ 00:10:55.945 START TEST nvmf_target_multipath 00:10:55.945 ************************************ 00:10:55.945 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:56.204 * Looking for test storage... 
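The nvmf_queue_depth run above ends with the usual nvmftestfini teardown: the initiator kernel modules are unloaded, the nvmf_tgt process is killed, the SPDK_NVMF-tagged iptables rules are dropped, and the veth/bridge topology plus the nvmf_tgt_ns_spdk namespace are removed. A minimal standalone sketch of that sequence, assembled only from the commands visible in this log, follows; the pid is the one from this particular run, and the closing ip netns delete is an assumption, since _remove_spdk_ns executes here with its output redirected to /dev/null.

# Sketch of the nvmftestfini teardown logged above (run as root).
NVMFPID=64868                                    # nvmf_tgt pid from this run
sync
modprobe -v -r nvme-tcp                          # also pulls out nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
kill "$NVMFPID" 2>/dev/null || true
while kill -0 "$NVMFPID" 2>/dev/null; do sleep 0.1; done   # wait for the target to exit
# Keep only firewall rules that were not added by the test (test rules carry SPDK_NVMF comments).
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Tear down the veth/bridge topology and the target namespace.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster || true
    ip link set "$dev" down || true
done
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if  || true
ip link delete nvmf_init_if2 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
ip netns delete nvmf_tgt_ns_spdk || true         # assumption: what _remove_spdk_ns amounts to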
00:10:56.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:56.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.204 --rc genhtml_branch_coverage=1 00:10:56.204 --rc genhtml_function_coverage=1 00:10:56.204 --rc genhtml_legend=1 00:10:56.204 --rc geninfo_all_blocks=1 00:10:56.204 --rc geninfo_unexecuted_blocks=1 00:10:56.204 00:10:56.204 ' 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:56.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.204 --rc genhtml_branch_coverage=1 00:10:56.204 --rc genhtml_function_coverage=1 00:10:56.204 --rc genhtml_legend=1 00:10:56.204 --rc geninfo_all_blocks=1 00:10:56.204 --rc geninfo_unexecuted_blocks=1 00:10:56.204 00:10:56.204 ' 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:56.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.204 --rc genhtml_branch_coverage=1 00:10:56.204 --rc genhtml_function_coverage=1 00:10:56.204 --rc genhtml_legend=1 00:10:56.204 --rc geninfo_all_blocks=1 00:10:56.204 --rc geninfo_unexecuted_blocks=1 00:10:56.204 00:10:56.204 ' 00:10:56.204 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:56.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.204 --rc genhtml_branch_coverage=1 00:10:56.204 --rc genhtml_function_coverage=1 00:10:56.204 --rc genhtml_legend=1 00:10:56.204 --rc geninfo_all_blocks=1 00:10:56.204 --rc geninfo_unexecuted_blocks=1 00:10:56.204 00:10:56.204 ' 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.205 
13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:56.205 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:56.205 13:45:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:56.205 Cannot find device "nvmf_init_br" 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:56.205 Cannot find device "nvmf_init_br2" 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:56.205 Cannot find device "nvmf_tgt_br" 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:56.205 Cannot find device "nvmf_tgt_br2" 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:56.205 Cannot find device "nvmf_init_br" 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:56.205 Cannot find device "nvmf_init_br2" 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:56.205 Cannot find device "nvmf_tgt_br" 00:10:56.205 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:10:56.206 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:56.464 Cannot find device "nvmf_tgt_br2" 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:56.464 Cannot find device "nvmf_br" 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:56.464 Cannot find device "nvmf_init_if" 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:56.464 Cannot find device "nvmf_init_if2" 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:56.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:56.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
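For readability, here is a consolidated sketch of the veth topology that nvmf_veth_init builds for this multipath test, assembled only from the ip and iptables commands visible in the surrounding records (the bridge enslaving, firewall rules, and ping checks continue in the records that follow). It is a sketch to be run as root, not the helper itself; the SPDK_NVMF comment is shortened here, whereas the real rules embed their full rule text after that tag.

# Two initiator-side veth pairs (10.0.0.1/.2) face two target-side pairs
# (10.0.0.3/.4) that live inside the nvmf_tgt_ns_spdk namespace; the *_br
# peer ends stay in the root namespace and are joined by the nvmf_br bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the root-namespace ends together and accept NVMe/TCP (port 4420) traffic.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF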
00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:56.464 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:56.723 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:56.723 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:56.723 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:56.723 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:56.723 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:56.723 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:56.723 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:56.723 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:56.723 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:56.723 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:56.723 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:56.723 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:10:56.723 00:10:56.723 --- 10.0.0.3 ping statistics --- 00:10:56.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.723 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:56.723 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:56.723 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:56.723 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:10:56.723 00:10:56.723 --- 10.0.0.4 ping statistics --- 00:10:56.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.723 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:56.723 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:56.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:56.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:10:56.723 00:10:56.723 --- 10.0.0.1 ping statistics --- 00:10:56.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.724 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:56.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:10:56.724 00:10:56.724 --- 10.0.0.2 ping statistics --- 00:10:56.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.724 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=65280 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 65280 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 65280 ']' 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:56.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:56.724 13:45:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:56.724 [2024-10-01 13:45:06.852878] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:56.724 [2024-10-01 13:45:06.853046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.982 [2024-10-01 13:45:06.996123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.982 [2024-10-01 13:45:07.153813] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.982 [2024-10-01 13:45:07.154261] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.982 [2024-10-01 13:45:07.154488] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.982 [2024-10-01 13:45:07.154679] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.982 [2024-10-01 13:45:07.154857] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.982 [2024-10-01 13:45:07.155138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.982 [2024-10-01 13:45:07.155262] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.982 [2024-10-01 13:45:07.155329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.982 [2024-10-01 13:45:07.155339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.240 [2024-10-01 13:45:07.215252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.806 13:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:57.806 13:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:10:57.806 13:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:57.806 13:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:57.806 13:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:57.806 13:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.806 13:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:58.064 [2024-10-01 13:45:08.113169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.064 13:45:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:58.630 Malloc0 00:10:58.630 13:45:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:58.888 13:45:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:59.146 13:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:59.404 [2024-10-01 13:45:09.464657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:59.404 13:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:10:59.662 [2024-10-01 13:45:09.728961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:10:59.662 13:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid=88f52f68-80e5-4327-8a21-70d63145da24 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:59.921 13:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid=88f52f68-80e5-4327-8a21-70d63145da24 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:10:59.921 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:59.921 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:59.921 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.921 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:59.921 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:01.898 13:45:12 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65375 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:01.898 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:02.156 [global] 00:11:02.156 thread=1 00:11:02.156 invalidate=1 00:11:02.156 rw=randrw 00:11:02.156 time_based=1 00:11:02.156 runtime=6 00:11:02.156 ioengine=libaio 00:11:02.156 direct=1 00:11:02.156 bs=4096 00:11:02.156 iodepth=128 00:11:02.156 norandommap=0 00:11:02.156 numjobs=1 00:11:02.156 00:11:02.156 verify_dump=1 00:11:02.156 verify_backlog=512 00:11:02.156 verify_state_save=0 00:11:02.156 do_verify=1 00:11:02.156 verify=crc32c-intel 00:11:02.156 [job0] 00:11:02.156 filename=/dev/nvme0n1 00:11:02.156 Could not set queue depth (nvme0n1) 00:11:02.156 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.156 fio-3.35 00:11:02.156 Starting 1 thread 00:11:03.098 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:03.356 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:03.614 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:03.614 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:03.614 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:03.614 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:03.614 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:03.614 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:03.614 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:03.614 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:03.614 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:03.614 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:03.614 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:03.614 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:03.614 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:03.872 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:04.130 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:04.130 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:04.130 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:04.130 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:04.130 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:04.130 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:04.130 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:04.130 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:04.130 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:04.130 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:04.130 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:04.130 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:04.131 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65375 00:11:08.348 00:11:08.348 job0: (groupid=0, jobs=1): err= 0: pid=65396: Tue Oct 1 13:45:18 2024 00:11:08.348 read: IOPS=9521, BW=37.2MiB/s (39.0MB/s)(223MiB/6003msec) 00:11:08.348 slat (usec): min=6, max=7950, avg=61.66, stdev=250.16 00:11:08.348 clat (usec): min=1371, max=20235, avg=9149.56, stdev=1669.28 00:11:08.348 lat (usec): min=1866, max=20246, avg=9211.22, stdev=1674.46 00:11:08.348 clat percentiles (usec): 00:11:08.348 | 1.00th=[ 4817], 5.00th=[ 6980], 10.00th=[ 7767], 20.00th=[ 8291], 00:11:08.348 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:11:08.348 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[13173], 00:11:08.348 | 99.00th=[14484], 99.50th=[14746], 99.90th=[16581], 99.95th=[18482], 00:11:08.348 | 99.99th=[20317] 00:11:08.348 bw ( KiB/s): min=14096, max=25032, per=51.41%, avg=19579.55, stdev=3535.89, samples=11 00:11:08.348 iops : min= 3524, max= 6258, avg=4894.82, stdev=883.96, samples=11 00:11:08.348 write: IOPS=5564, BW=21.7MiB/s (22.8MB/s)(117MiB/5378msec); 0 zone resets 00:11:08.348 slat (usec): min=14, max=1903, avg=71.15, stdev=178.28 00:11:08.348 clat (usec): min=1774, max=17063, avg=7967.13, stdev=1473.35 00:11:08.348 lat (usec): min=1798, max=17583, avg=8038.29, stdev=1478.78 00:11:08.348 clat percentiles (usec): 00:11:08.348 | 1.00th=[ 3621], 5.00th=[ 4686], 10.00th=[ 6063], 20.00th=[ 7373], 00:11:08.348 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8356], 00:11:08.348 | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[ 9110], 95.00th=[ 9634], 00:11:08.348 | 99.00th=[12518], 99.50th=[13173], 99.90th=[15008], 99.95th=[15533], 00:11:08.348 | 99.99th=[16581] 00:11:08.348 bw ( KiB/s): min=14328, max=24576, per=88.19%, avg=19631.82, stdev=3166.99, samples=11 00:11:08.348 iops : min= 3582, max= 6144, avg=4907.91, stdev=791.73, samples=11 00:11:08.348 lat (msec) : 2=0.03%, 4=0.88%, 10=87.79%, 20=11.29%, 50=0.01% 00:11:08.348 cpu : usr=5.41%, sys=20.43%, ctx=5150, majf=0, minf=72 00:11:08.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:08.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.348 issued rwts: total=57158,29927,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.348 00:11:08.348 Run status group 0 (all jobs): 00:11:08.348 READ: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=223MiB (234MB), run=6003-6003msec 00:11:08.348 WRITE: bw=21.7MiB/s (22.8MB/s), 21.7MiB/s-21.7MiB/s (22.8MB/s-22.8MB/s), io=117MiB (123MB), run=5378-5378msec 00:11:08.348 00:11:08.348 Disk stats (read/write): 00:11:08.348 nvme0n1: ios=56402/29345, merge=0/0, ticks=495351/219691, in_queue=715042, util=98.56% 00:11:08.348 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:08.606 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65480 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:08.864 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:08.864 [global] 00:11:08.864 thread=1 00:11:08.864 invalidate=1 00:11:08.864 rw=randrw 00:11:08.864 time_based=1 00:11:08.864 runtime=6 00:11:08.864 ioengine=libaio 00:11:08.864 direct=1 00:11:08.864 bs=4096 00:11:08.864 iodepth=128 00:11:08.864 norandommap=0 00:11:08.864 numjobs=1 00:11:08.864 00:11:08.864 verify_dump=1 00:11:08.864 verify_backlog=512 00:11:08.864 verify_state_save=0 00:11:08.864 do_verify=1 00:11:08.864 verify=crc32c-intel 00:11:08.864 [job0] 00:11:08.864 filename=/dev/nvme0n1 00:11:08.864 Could not set queue depth (nvme0n1) 00:11:09.122 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.122 fio-3.35 00:11:09.122 Starting 1 thread 00:11:10.056 13:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:10.314 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:10.611 
13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:10.611 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:10.611 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:10.611 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:10.611 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:10.611 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:10.611 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:10.611 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:10.611 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:10.611 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:10.611 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:10.611 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:10.611 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:10.875 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:11.135 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:11.135 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:11.135 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:11.135 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:11.135 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:11.135 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:11.135 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:11.135 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:11.135 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:11.135 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:11.135 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:11.135 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:11.135 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65480 00:11:15.326 00:11:15.326 job0: (groupid=0, jobs=1): err= 0: pid=65501: Tue Oct 1 13:45:25 2024 00:11:15.326 read: IOPS=9733, BW=38.0MiB/s (39.9MB/s)(228MiB/6002msec) 00:11:15.326 slat (usec): min=2, max=8630, avg=51.02, stdev=225.94 00:11:15.327 clat (usec): min=451, max=20658, avg=9063.59, stdev=2199.54 00:11:15.327 lat (usec): min=462, max=20675, avg=9114.60, stdev=2208.86 00:11:15.327 clat percentiles (usec): 00:11:15.327 | 1.00th=[ 3884], 5.00th=[ 5211], 10.00th=[ 6325], 20.00th=[ 7832], 00:11:15.327 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:11:15.327 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11600], 95.00th=[13435], 00:11:15.327 | 99.00th=[15008], 99.50th=[16057], 99.90th=[18220], 99.95th=[19006], 00:11:15.327 | 99.99th=[20579] 00:11:15.327 bw ( KiB/s): min= 5120, max=27920, per=52.12%, avg=20290.91, stdev=7263.90, samples=11 00:11:15.327 iops : min= 1280, max= 6980, avg=5072.73, stdev=1815.97, samples=11 00:11:15.327 write: IOPS=5828, BW=22.8MiB/s (23.9MB/s)(121MiB/5311msec); 0 zone resets 00:11:15.327 slat (usec): min=3, max=5147, avg=59.83, stdev=152.58 00:11:15.327 clat (usec): min=854, max=17554, avg=7508.30, stdev=1961.92 00:11:15.327 lat (usec): min=939, max=17578, avg=7568.12, stdev=1972.74 00:11:15.327 clat percentiles (usec): 00:11:15.327 | 1.00th=[ 2868], 5.00th=[ 3949], 10.00th=[ 4555], 20.00th=[ 5735], 00:11:15.327 | 30.00th=[ 6980], 40.00th=[ 7570], 50.00th=[ 7898], 60.00th=[ 8225], 00:11:15.327 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10028], 00:11:15.327 | 99.00th=[13042], 99.50th=[13698], 99.90th=[15008], 99.95th=[15401], 00:11:15.327 | 99.99th=[17433] 00:11:15.327 bw ( KiB/s): min= 5608, max=28918, per=87.30%, avg=20352.55, stdev=7086.19, samples=11 00:11:15.327 iops : min= 1402, max= 7229, avg=5088.09, stdev=1771.49, samples=11 00:11:15.327 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:11:15.327 lat (msec) : 2=0.20%, 4=2.44%, 10=79.62%, 20=17.69%, 50=0.02% 00:11:15.327 cpu : usr=5.80%, sys=21.66%, ctx=5161, majf=0, minf=72 00:11:15.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:15.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:15.327 issued rwts: total=58419,30953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.327 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:11:15.327 00:11:15.327 Run status group 0 (all jobs): 00:11:15.327 READ: bw=38.0MiB/s (39.9MB/s), 38.0MiB/s-38.0MiB/s (39.9MB/s-39.9MB/s), io=228MiB (239MB), run=6002-6002msec 00:11:15.327 WRITE: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=121MiB (127MB), run=5311-5311msec 00:11:15.327 00:11:15.327 Disk stats (read/write): 00:11:15.327 nvme0n1: ios=57843/30223, merge=0/0, ticks=503396/212263, in_queue=715659, util=98.75% 00:11:15.327 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:15.327 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.327 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:11:15.327 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:15.327 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.327 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.327 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:15.327 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:11:15.327 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.585 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:15.585 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:15.585 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:15.585 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:15.585 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:15.585 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.842 rmmod nvme_tcp 00:11:15.842 rmmod nvme_fabrics 00:11:15.842 rmmod nvme_keyring 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 
65280 ']' 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 65280 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 65280 ']' 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 65280 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65280 00:11:15.842 killing process with pid 65280 00:11:15.842 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:15.843 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:15.843 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65280' 00:11:15.843 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 65280 00:11:15.843 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 65280 00:11:16.410 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:16.410 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:16.410 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:16.410 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:16.410 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:11:16.410 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:16.410 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:11:16.410 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.410 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:16.410 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:16.410 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:16.411 13:45:26 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:11:16.411 ************************************ 00:11:16.411 END TEST nvmf_target_multipath 00:11:16.411 ************************************ 00:11:16.411 00:11:16.411 real 0m20.448s 00:11:16.411 user 1m17.299s 00:11:16.411 sys 0m8.009s 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.411 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:16.671 ************************************ 00:11:16.671 START TEST nvmf_zcopy 00:11:16.671 ************************************ 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:16.671 * Looking for test storage... 
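The multipath test that wraps up above exercises failover purely from the target side: it flips the ANA state of each listener over rpc.py and then polls the host's sysfs view until the kernel multipath layer agrees, while fio keeps I/O running. Below is a minimal bash sketch of that sequence, reconstructed from the xtrace earlier in this log; the NQN, addresses and sysfs paths are the ones shown there, while the retry loop and its 20-second budget are an assumption, since the trace only captures checks that pass on the first comparison.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Poll /sys/block/<path device>/ana_state until it reports the expected state
# (hypothetical retry loop; only the existence test and the string comparison
# are confirmed by the xtrace above).
check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- > 0 )) || return 1
        sleep 1
    done
}

# Demote the 10.0.0.3 listener and promote 10.0.0.4, then wait for both
# controller paths to reflect the change (commands as traced in the log).
"$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
"$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
check_ana_state nvme0c0n1 inaccessible
check_ana_state nvme0c1n1 non-optimized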
00:11:16.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:16.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.671 --rc genhtml_branch_coverage=1 00:11:16.671 --rc genhtml_function_coverage=1 00:11:16.671 --rc genhtml_legend=1 00:11:16.671 --rc geninfo_all_blocks=1 00:11:16.671 --rc geninfo_unexecuted_blocks=1 00:11:16.671 00:11:16.671 ' 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:16.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.671 --rc genhtml_branch_coverage=1 00:11:16.671 --rc genhtml_function_coverage=1 00:11:16.671 --rc genhtml_legend=1 00:11:16.671 --rc geninfo_all_blocks=1 00:11:16.671 --rc geninfo_unexecuted_blocks=1 00:11:16.671 00:11:16.671 ' 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:16.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.671 --rc genhtml_branch_coverage=1 00:11:16.671 --rc genhtml_function_coverage=1 00:11:16.671 --rc genhtml_legend=1 00:11:16.671 --rc geninfo_all_blocks=1 00:11:16.671 --rc geninfo_unexecuted_blocks=1 00:11:16.671 00:11:16.671 ' 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:16.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.671 --rc genhtml_branch_coverage=1 00:11:16.671 --rc genhtml_function_coverage=1 00:11:16.671 --rc genhtml_legend=1 00:11:16.671 --rc geninfo_all_blocks=1 00:11:16.671 --rc geninfo_unexecuted_blocks=1 00:11:16.671 00:11:16.671 ' 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
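The 'lt 1.15 2' trace above is how the suite decides which lcov coverage flags it may pass: a plain field-by-field numeric comparison of version strings split on dots, dashes and colons. Below is a simplified stand-in for that helper, written from the xtrace rather than from scripts/common.sh itself, so the function name and the handling of operators other than '<' are assumptions.

# Return success when version $1 is strictly older than version $2.
version_lt() {
    local -a a b
    local i x y
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo 'lcov predates 2.x, keep the branch/function coverage options'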
00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.671 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.672 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
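The identity knobs set a little earlier (NVME_HOSTNQN from 'nvme gen-hostnqn', NVME_HOSTID, the NVME_HOST argument array and NVME_CONNECT) are what the suite's connect helpers feed to nvme-cli; the multipath test above tears such a session down with 'nvme disconnect'. Below is an illustrative attach/detach using only values already present in this log; the exact connect invocation is not part of this excerpt, so treat it as a sketch rather than the helper itself.

# Attach the kernel NVMe/TCP initiator to the test subsystem using the host
# identity generated above (illustrative call; flags are standard nvme-cli).
$NVME_CONNECT "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420

# The namespace shows up with the serial configured for the subsystem.
lsblk -l -o NAME,SERIAL | grep -w SPDKISFASTANDAWESOME

# Detach again, exactly as the multipath test does at the end of its run.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1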
00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:16.672 Cannot find device "nvmf_init_br" 00:11:16.672 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:16.672 13:45:26 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:16.931 Cannot find device "nvmf_init_br2" 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:16.931 Cannot find device "nvmf_tgt_br" 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.931 Cannot find device "nvmf_tgt_br2" 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:16.931 Cannot find device "nvmf_init_br" 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:16.931 Cannot find device "nvmf_init_br2" 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:16.931 Cannot find device "nvmf_tgt_br" 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:16.931 Cannot find device "nvmf_tgt_br2" 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:16.931 Cannot find device "nvmf_br" 00:11:16.931 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:11:16.932 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:16.932 Cannot find device "nvmf_init_if" 00:11:16.932 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:11:16.932 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:16.932 Cannot find device "nvmf_init_if2" 00:11:16.932 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:11:16.932 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.932 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:11:16.932 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.932 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:11:16.932 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:16.932 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:16.932 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:11:16.932 13:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:16.932 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:16.932 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:16.932 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:16.932 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:16.932 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:16.932 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:16.932 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:16.932 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:16.932 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:17.191 13:45:27 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:17.191 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:17.191 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:11:17.191 00:11:17.191 --- 10.0.0.3 ping statistics --- 00:11:17.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.191 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:17.191 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:17.191 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:11:17.191 00:11:17.191 --- 10.0.0.4 ping statistics --- 00:11:17.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.191 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:17.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:11:17.191 00:11:17.191 --- 10.0.0.1 ping statistics --- 00:11:17.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.191 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:17.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:17.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:11:17.191 00:11:17.191 --- 10.0.0.2 ping statistics --- 00:11:17.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.191 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=65809 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 65809 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 65809 ']' 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:17.191 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:17.191 [2024-10-01 13:45:27.356512] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
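Stripped of the xtrace noise, the virtual test network that nvmf_veth_init assembles above is: one network namespace for the SPDK target, four veth pairs (two initiator-side, two target-side), a single bridge joining the host-side ends, iptables rules admitting port 4420, and ping checks in both directions. The same steps as a standalone sketch, with interface names, addresses and rules taken verbatim from the log and error handling omitted.

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: the *_if ends carry addresses, the *_br ends join the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live inside the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and enslave the host-side ends to one bridge.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Admit NVMe/TCP traffic on port 4420 and let the bridge forward.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, as run above: target addresses from the host, initiator
# addresses from inside the namespace.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec "$NS" ping -c 1 10.0.0.1 && ip netns exec "$NS" ping -c 1 10.0.0.2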
00:11:17.192 [2024-10-01 13:45:27.356945] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.450 [2024-10-01 13:45:27.499975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.450 [2024-10-01 13:45:27.624111] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.450 [2024-10-01 13:45:27.624189] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.450 [2024-10-01 13:45:27.624201] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.450 [2024-10-01 13:45:27.624210] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.450 [2024-10-01 13:45:27.624217] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.450 [2024-10-01 13:45:27.624271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.709 [2024-10-01 13:45:27.680850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.278 [2024-10-01 13:45:28.420736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:11:18.278 [2024-10-01 13:45:28.436863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.278 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.537 malloc0 00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:18.537 { 00:11:18.537 "params": { 00:11:18.537 "name": "Nvme$subsystem", 00:11:18.537 "trtype": "$TEST_TRANSPORT", 00:11:18.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:18.537 "adrfam": "ipv4", 00:11:18.537 "trsvcid": "$NVMF_PORT", 00:11:18.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:18.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:18.537 "hdgst": ${hdgst:-false}, 00:11:18.537 "ddgst": ${ddgst:-false} 00:11:18.537 }, 00:11:18.537 "method": "bdev_nvme_attach_controller" 00:11:18.537 } 00:11:18.537 EOF 00:11:18.537 )") 00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
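Collapsed into plain rpc.py calls, the target-side setup for the zcopy run traced above is: a TCP transport created with zero-copy enabled, one allow-any-host subsystem capped at 10 namespaces, listeners for both the subsystem and discovery on the in-namespace address, and a 32 MiB malloc bdev attached as namespace 1. The commands and arguments are the ones in the trace; only the rpc shorthand variable is introduced here.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with the exact flags traced above, including --zcopy.
"$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem cnode1: allow any host (-a), serial number, at most 10 namespaces.
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Listen for I/O and discovery on the target-namespace address.
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# 32 MiB malloc bdev with 4 KiB blocks, exposed as namespace 1 of cnode1.
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1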
00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:11:18.537 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:18.537 "params": { 00:11:18.537 "name": "Nvme1", 00:11:18.537 "trtype": "tcp", 00:11:18.537 "traddr": "10.0.0.3", 00:11:18.537 "adrfam": "ipv4", 00:11:18.537 "trsvcid": "4420", 00:11:18.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.537 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:18.537 "hdgst": false, 00:11:18.537 "ddgst": false 00:11:18.537 }, 00:11:18.537 "method": "bdev_nvme_attach_controller" 00:11:18.537 }' 00:11:18.537 [2024-10-01 13:45:28.561729] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:11:18.537 [2024-10-01 13:45:28.561847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65842 ] 00:11:18.537 [2024-10-01 13:45:28.701281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.796 [2024-10-01 13:45:28.863998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.796 [2024-10-01 13:45:28.953959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:19.054 Running I/O for 10 seconds... 00:11:28.989 5462.00 IOPS, 42.67 MiB/s 5503.50 IOPS, 43.00 MiB/s 5545.33 IOPS, 43.32 MiB/s 5580.25 IOPS, 43.60 MiB/s 5606.80 IOPS, 43.80 MiB/s 5630.50 IOPS, 43.99 MiB/s 5649.57 IOPS, 44.14 MiB/s 5659.88 IOPS, 44.22 MiB/s 5669.33 IOPS, 44.29 MiB/s 5689.50 IOPS, 44.45 MiB/s 00:11:28.989 Latency(us) 00:11:28.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.989 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:28.989 Verification LBA range: start 0x0 length 0x1000 00:11:28.989 Nvme1n1 : 10.02 5692.79 44.47 0.00 0.00 22414.85 3336.38 34078.72 00:11:28.989 =================================================================================================================== 00:11:28.989 Total : 5692.79 44.47 0.00 0.00 22414.85 3336.38 34078.72 00:11:29.249 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65965 00:11:29.249 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:29.249 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.249 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:29.249 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:29.249 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:11:29.249 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:11:29.249 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:29.249 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:29.249 { 00:11:29.249 "params": { 00:11:29.249 "name": "Nvme$subsystem", 00:11:29.249 "trtype": "$TEST_TRANSPORT", 00:11:29.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:29.249 "adrfam": "ipv4", 00:11:29.249 "trsvcid": "$NVMF_PORT", 00:11:29.249 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:11:29.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:29.249 "hdgst": ${hdgst:-false}, 00:11:29.249 "ddgst": ${ddgst:-false} 00:11:29.249 }, 00:11:29.249 "method": "bdev_nvme_attach_controller" 00:11:29.249 } 00:11:29.249 EOF 00:11:29.249 )") 00:11:29.249 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:11:29.249 [2024-10-01 13:45:39.345106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.249 [2024-10-01 13:45:39.345159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.249 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:11:29.249 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:11:29.249 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:29.249 "params": { 00:11:29.249 "name": "Nvme1", 00:11:29.249 "trtype": "tcp", 00:11:29.249 "traddr": "10.0.0.3", 00:11:29.249 "adrfam": "ipv4", 00:11:29.249 "trsvcid": "4420", 00:11:29.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:29.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:29.249 "hdgst": false, 00:11:29.249 "ddgst": false 00:11:29.249 }, 00:11:29.249 "method": "bdev_nvme_attach_controller" 00:11:29.249 }' 00:11:29.249 [2024-10-01 13:45:39.357080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.249 [2024-10-01 13:45:39.357120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.249 [2024-10-01 13:45:39.369071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.249 [2024-10-01 13:45:39.369110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.249 [2024-10-01 13:45:39.381082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.250 [2024-10-01 13:45:39.381121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.250 [2024-10-01 13:45:39.381942] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:11:29.250 [2024-10-01 13:45:39.382017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65965 ] 00:11:29.250 [2024-10-01 13:45:39.393080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.250 [2024-10-01 13:45:39.393257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.250 [2024-10-01 13:45:39.405090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.250 [2024-10-01 13:45:39.405269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.250 [2024-10-01 13:45:39.413092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.250 [2024-10-01 13:45:39.413260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.250 [2024-10-01 13:45:39.421106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.250 [2024-10-01 13:45:39.421264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.433114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.433273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.445106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.445264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.457098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.457263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.469105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.469258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.481114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.481284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.493116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.493274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.505121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.505280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.514538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.508 [2024-10-01 13:45:39.517116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.517273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.529152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.529365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.541136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.541305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.553139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.553304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.565142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.565306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.577186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.577471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.589157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.589368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.601151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.601316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.613146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.613304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.621151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.508 [2024-10-01 13:45:39.621305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.508 [2024-10-01 13:45:39.626788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.509 [2024-10-01 13:45:39.629152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.509 [2024-10-01 13:45:39.629304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.509 [2024-10-01 13:45:39.637159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.509 [2024-10-01 13:45:39.637313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.509 [2024-10-01 13:45:39.645166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.509 [2024-10-01 13:45:39.645329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.509 [2024-10-01 13:45:39.653171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.509 [2024-10-01 13:45:39.653213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.509 [2024-10-01 13:45:39.661188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.509 [2024-10-01 13:45:39.661236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.509 [2024-10-01 13:45:39.673192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.509 [2024-10-01 13:45:39.673243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.509 
[2024-10-01 13:45:39.685206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.509 [2024-10-01 13:45:39.685258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.688040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:29.767 [2024-10-01 13:45:39.697202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.697248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.709208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.709254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.721193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.721234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.733206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.733249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.745220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.745261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.757248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.757288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.769230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.769272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.781251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.781294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.793259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.793302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 Running I/O for 5 seconds... 
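The "Requested NSID 1 already in use" / "Unable to add namespace" pairs that fill the rest of this run appear to be expected noise: while bdevperf drives I/O ("Running I/O for 5 seconds..." above), the zcopy test keeps re-issuing a namespace-add RPC for an NSID that is already attached to cnode1, and the target rejects each attempt, once in subsystem.c and once in the RPC pause handler. A rough equivalent of that loop is sketched below; the scripts/rpc.py path, the malloc0 bdev name, the iteration count and the -n nsid flag spelling are illustrative assumptions rather than the test's literal code.

  # keep re-adding an NSID that is already in use so the target logs the
  # subsystem.c / nvmf_rpc.c error pair while I/O continues undisturbed
  for _ in $(seq 1 50); do
      scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0 || true
  done

Each rejected call surfaces as one of the error pairs interleaved with the bdevperf output below.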
00:11:29.767 [2024-10-01 13:45:39.805259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.805300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.823639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.823806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.838135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.838302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.852869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.852926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.868434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.868478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.884497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.884540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.903114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.903156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.916849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.916893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.767 [2024-10-01 13:45:39.931693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.767 [2024-10-01 13:45:39.931737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:39.947126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:39.947169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:39.957251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:39.957416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:39.972493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:39.972653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:39.987803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:39.987984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:40.003099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:40.003256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:40.019860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 
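The per-second summaries that appear from here on (e.g. "10692.00 IOPS, 83.53 MiB/s") are internally consistent with an 8 KiB I/O size, which can be inferred from the numbers themselves rather than read off the bdevperf command line: throughput is simply IOPS times 8192 bytes. A one-liner to reproduce the conversion:

  awk 'BEGIN { iops = 10692; io_size = 8192; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
  # prints 83.53 MiB/s, matching the first summary line; the later 10833.00 and
  # 11081.67 IOPS samples work out to 84.63 and 86.58 MiB/s the same way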
[2024-10-01 13:45:40.019904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:40.036435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:40.036479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:40.055412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:40.055456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:40.069431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:40.069475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:40.084421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:40.084464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:40.099085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:40.099129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:40.115046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:40.115087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:40.133815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:40.133859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:40.147978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:40.148021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:40.158507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:40.158678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:40.170536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:40.170589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:40.184946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:40.184986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.026 [2024-10-01 13:45:40.200289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.026 [2024-10-01 13:45:40.200458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.211091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.211132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.223514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.223556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.237795] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.237840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.253356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.253398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.268806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.268849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.285521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.285566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.302282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.302472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.319746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.319810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.334862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.334927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.350460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.350526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.367629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.367694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.383048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.383110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.399121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.399182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.415519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.415586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.432113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.432178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.285 [2024-10-01 13:45:40.449074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.285 [2024-10-01 13:45:40.449133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.543 [2024-10-01 13:45:40.464157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.543 [2024-10-01 13:45:40.464358] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.543 [2024-10-01 13:45:40.479639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.543 [2024-10-01 13:45:40.479957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.543 [2024-10-01 13:45:40.490682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.543 [2024-10-01 13:45:40.490982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.543 [2024-10-01 13:45:40.505028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.543 [2024-10-01 13:45:40.505073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.543 [2024-10-01 13:45:40.521252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.543 [2024-10-01 13:45:40.521321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.543 [2024-10-01 13:45:40.536618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.543 [2024-10-01 13:45:40.536667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.543 [2024-10-01 13:45:40.552521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.543 [2024-10-01 13:45:40.552565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.543 [2024-10-01 13:45:40.563163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.543 [2024-10-01 13:45:40.563204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.543 [2024-10-01 13:45:40.578364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.544 [2024-10-01 13:45:40.578541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.544 [2024-10-01 13:45:40.595058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.544 [2024-10-01 13:45:40.595098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.544 [2024-10-01 13:45:40.611451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.544 [2024-10-01 13:45:40.611494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.544 [2024-10-01 13:45:40.628699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.544 [2024-10-01 13:45:40.628742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.544 [2024-10-01 13:45:40.645228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.544 [2024-10-01 13:45:40.645404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.544 [2024-10-01 13:45:40.662612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.544 [2024-10-01 13:45:40.662656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.544 [2024-10-01 13:45:40.679069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.544 [2024-10-01 13:45:40.679110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.544 [2024-10-01 13:45:40.689858] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.544 [2024-10-01 13:45:40.689902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.544 [2024-10-01 13:45:40.706156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.544 [2024-10-01 13:45:40.706200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.721572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.721613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.738069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.738113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.748439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.748518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.761669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.761712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.776547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.776591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.793643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.793685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 10692.00 IOPS, 83.53 MiB/s [2024-10-01 13:45:40.809660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.809702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.820314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.820357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.835218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.835271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.850886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.851095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.861561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.861607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.874560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.874727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.889523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 
13:45:40.889696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.900583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.900744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.916190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.916233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.930701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.930752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.945713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.945758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.962032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.962075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.802 [2024-10-01 13:45:40.978262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.802 [2024-10-01 13:45:40.978306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.060 [2024-10-01 13:45:40.995394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.060 [2024-10-01 13:45:40.995560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.060 [2024-10-01 13:45:41.012473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.060 [2024-10-01 13:45:41.012515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.060 [2024-10-01 13:45:41.028691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.060 [2024-10-01 13:45:41.028746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.060 [2024-10-01 13:45:41.045893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.060 [2024-10-01 13:45:41.045954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.060 [2024-10-01 13:45:41.063047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.060 [2024-10-01 13:45:41.063096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.060 [2024-10-01 13:45:41.079086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.060 [2024-10-01 13:45:41.079130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.060 [2024-10-01 13:45:41.098071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.060 [2024-10-01 13:45:41.098116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.060 [2024-10-01 13:45:41.112286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.060 [2024-10-01 13:45:41.112330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.060 [2024-10-01 13:45:41.127353] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.060 [2024-10-01 13:45:41.127406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.060 [2024-10-01 13:45:41.143053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.060 [2024-10-01 13:45:41.143096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.061 [2024-10-01 13:45:41.153485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.061 [2024-10-01 13:45:41.153654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.061 [2024-10-01 13:45:41.166357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.061 [2024-10-01 13:45:41.166399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.061 [2024-10-01 13:45:41.181814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.061 [2024-10-01 13:45:41.181858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.061 [2024-10-01 13:45:41.197893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.061 [2024-10-01 13:45:41.197949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.061 [2024-10-01 13:45:41.214285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.061 [2024-10-01 13:45:41.214339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.061 [2024-10-01 13:45:41.224591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.061 [2024-10-01 13:45:41.224774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.061 [2024-10-01 13:45:41.237727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.061 [2024-10-01 13:45:41.237776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.252621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.252668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.263130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.263298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.278855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.279067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.294855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.295057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.310512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.310692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.321143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.321315] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.334415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.334458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.346222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.346265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.361705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.361747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.378328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.378373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.387646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.387690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.403839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.403883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.420127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.420315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.436846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.436892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.454726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.454778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.469349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.469395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.321 [2024-10-01 13:45:41.485040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.321 [2024-10-01 13:45:41.485081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.503021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.503067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.518500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.518680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.534848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.534893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.552172] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.552218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.566960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.567003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.577103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.577145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.592242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.592434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.609084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.609124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.624884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.624936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.642366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.642407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.657532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.657571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.667259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.667447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.683274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.683316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.699655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.699696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.717825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.717879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.732944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.732987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.742832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.581 [2024-10-01 13:45:41.743054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.581 [2024-10-01 13:45:41.758399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.758571] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:41.773828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.774075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:41.783576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.783617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:41.799460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.799519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 10833.00 IOPS, 84.63 MiB/s [2024-10-01 13:45:41.815520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.815563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:41.834164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.834211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:41.849477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.849709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:41.867778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.867840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:41.882769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.882822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:41.892940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.892988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:41.908429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.908661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:41.923640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.923800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:41.939813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.939855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:41.956349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.956389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:41.973811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.973851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:41.989948] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:41.989987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.841 [2024-10-01 13:45:42.006017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.841 [2024-10-01 13:45:42.006057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.023846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.023888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.038787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.038829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.048358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.048398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.063386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.063426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.079735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.079776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.096944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.096981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.112845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.112884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.122539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.122578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.138426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.138468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.155481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.155521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.173082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.173120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.187878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.187933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.203609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.203649] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.220021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.220056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.237286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.237326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.252117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.252158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.268529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.268575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.136 [2024-10-01 13:45:42.284713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.136 [2024-10-01 13:45:42.284755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.302095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.302133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.318603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.318646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.335009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.335047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.352191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.352230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.368056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.368093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.378231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.378417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.394117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.394167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.410442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.410483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.426977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.427016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.443434] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.443474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.459778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.459817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.478168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.478206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.492591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.492629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.508102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.508141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.527560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.527599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.542383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.542424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.559894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.559948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.574962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.575001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.422 [2024-10-01 13:45:42.584022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.422 [2024-10-01 13:45:42.584059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.681 [2024-10-01 13:45:42.600433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.681 [2024-10-01 13:45:42.600474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.681 [2024-10-01 13:45:42.616844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.681 [2024-10-01 13:45:42.616884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.681 [2024-10-01 13:45:42.635832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.681 [2024-10-01 13:45:42.635873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.681 [2024-10-01 13:45:42.650957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.681 [2024-10-01 13:45:42.650997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.681 [2024-10-01 13:45:42.665974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.681 [2024-10-01 13:45:42.666015] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.681 [2024-10-01 13:45:42.675772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.681 [2024-10-01 13:45:42.675824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.681 
[... the same two-message pair (subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats continuously from 2024-10-01 13:45:42.687 through 13:45:44.813 while the timed workload runs; only the periodic throughput samples interleaved with it differ, and those are kept below ...]
00:11:32.682 11081.67 IOPS, 86.58 MiB/s 
00:11:33.717 11214.25 IOPS, 87.61 MiB/s 
00:11:34.753 11277.40 IOPS, 88.10 MiB/s 
00:11:34.753 
00:11:34.753 Latency(us) 
00:11:34.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:11:34.753 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 
00:11:34.753 Nvme1n1 : 5.01 11281.28 88.14 0.00 0.00 11332.29 4766.25 19899.11 
00:11:34.753 
=================================================================================================================== 00:11:34.753 Total : 11281.28 88.14 0.00 0.00 11332.29 4766.25 19899.11 00:11:34.753 [2024-10-01 13:45:44.822985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.753 [2024-10-01 13:45:44.823025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.753 [2024-10-01 13:45:44.830970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.753 [2024-10-01 13:45:44.831003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.753 [2024-10-01 13:45:44.842998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.753 [2024-10-01 13:45:44.843043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.753 [2024-10-01 13:45:44.855013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.753 [2024-10-01 13:45:44.855058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.754 [2024-10-01 13:45:44.867012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.754 [2024-10-01 13:45:44.867055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.754 [2024-10-01 13:45:44.879006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.754 [2024-10-01 13:45:44.879050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.754 [2024-10-01 13:45:44.891021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.754 [2024-10-01 13:45:44.891067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.754 [2024-10-01 13:45:44.903024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.754 [2024-10-01 13:45:44.903067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.754 [2024-10-01 13:45:44.915027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.754 [2024-10-01 13:45:44.915070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.754 [2024-10-01 13:45:44.927020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.754 [2024-10-01 13:45:44.927063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.013 [2024-10-01 13:45:44.939023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.013 [2024-10-01 13:45:44.939064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.013 [2024-10-01 13:45:44.951015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.013 [2024-10-01 13:45:44.951066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.013 [2024-10-01 13:45:44.963013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.013 [2024-10-01 13:45:44.963046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.013 [2024-10-01 13:45:44.975026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.013 [2024-10-01 13:45:44.975071] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.013 [2024-10-01 13:45:44.987041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.013 [2024-10-01 13:45:44.987081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.013 [2024-10-01 13:45:44.999020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.013 [2024-10-01 13:45:44.999056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.013 [2024-10-01 13:45:45.011036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.013 [2024-10-01 13:45:45.011076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.013 [2024-10-01 13:45:45.023041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.013 [2024-10-01 13:45:45.023080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.013 [2024-10-01 13:45:45.035028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.013 [2024-10-01 13:45:45.035060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.013 [2024-10-01 13:45:45.047050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.013 [2024-10-01 13:45:45.047087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.013 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65965) - No such process 00:11:35.013 13:45:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65965 00:11:35.013 13:45:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.013 13:45:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.013 13:45:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.013 13:45:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.013 13:45:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:35.013 13:45:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.013 13:45:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.013 delay0 00:11:35.013 13:45:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.013 13:45:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:35.013 13:45:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.013 13:45:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.013 13:45:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.013 13:45:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:11:35.272 [2024-10-01 13:45:45.253660] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: 
Skipping unsupported current discovery service or discovery service referral 00:11:41.846 Initializing NVMe Controllers 00:11:41.846 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:41.846 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:41.846 Initialization complete. Launching workers. 00:11:41.846 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 53 00:11:41.846 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 340, failed to submit 33 00:11:41.846 success 191, unsuccessful 149, failed 0 00:11:41.846 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:41.846 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:41.846 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:41.846 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:41.846 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:41.846 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:41.847 rmmod nvme_tcp 00:11:41.847 rmmod nvme_fabrics 00:11:41.847 rmmod nvme_keyring 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 65809 ']' 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 65809 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 65809 ']' 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 65809 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65809 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:41.847 killing process with pid 65809 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65809' 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 65809 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 65809 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:11:41.847 00:11:41.847 real 0m25.310s 00:11:41.847 user 0m40.770s 00:11:41.847 sys 0m7.064s 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:41.847 ************************************ 00:11:41.847 END TEST nvmf_zcopy 00:11:41.847 ************************************ 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
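The zcopy pass that just finished boils down to a short RPC sequence plus one run of the abort example; a minimal manual reproduction might look like the sketch below. It assumes a target is already listening on 10.0.0.3:4420 with subsystem nqn.2016-06.io.spdk:cnode1 and a malloc bdev named malloc0, that scripts/rpc.py (the tool wrapped by rpc_cmd in this trace) talks to the default RPC socket, and that the commands run from the top of an SPDK checkout; all parameter values are copied from the trace above.

    # Swap the existing namespace for one backed by an artificially slow delay bdev.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # -r/-t avg and p99 read, -w/-n avg and p99 write latency (us)
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Drive 50/50 random read/write at queue depth 64 for 5 s on core 0 and exercise abort handling.
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The delay bdev keeps requests outstanding long enough for the aborts to race against them, which is presumably what produces the mixed success/unsuccessful counts reported above.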
00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:41.847 ************************************ 00:11:41.847 START TEST nvmf_nmic 00:11:41.847 ************************************ 00:11:41.847 13:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:42.108 * Looking for test storage... 00:11:42.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.108 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:42.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.109 --rc genhtml_branch_coverage=1 00:11:42.109 --rc genhtml_function_coverage=1 00:11:42.109 --rc genhtml_legend=1 00:11:42.109 --rc geninfo_all_blocks=1 00:11:42.109 --rc geninfo_unexecuted_blocks=1 00:11:42.109 00:11:42.109 ' 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:42.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.109 --rc genhtml_branch_coverage=1 00:11:42.109 --rc genhtml_function_coverage=1 00:11:42.109 --rc genhtml_legend=1 00:11:42.109 --rc geninfo_all_blocks=1 00:11:42.109 --rc geninfo_unexecuted_blocks=1 00:11:42.109 00:11:42.109 ' 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:42.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.109 --rc genhtml_branch_coverage=1 00:11:42.109 --rc genhtml_function_coverage=1 00:11:42.109 --rc genhtml_legend=1 00:11:42.109 --rc geninfo_all_blocks=1 00:11:42.109 --rc geninfo_unexecuted_blocks=1 00:11:42.109 00:11:42.109 ' 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:42.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.109 --rc genhtml_branch_coverage=1 00:11:42.109 --rc genhtml_function_coverage=1 00:11:42.109 --rc genhtml_legend=1 00:11:42.109 --rc geninfo_all_blocks=1 00:11:42.109 --rc geninfo_unexecuted_blocks=1 00:11:42.109 00:11:42.109 ' 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.109 13:45:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.109 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:42.109 13:45:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:42.109 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:42.110 Cannot 
find device "nvmf_init_br" 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:42.110 Cannot find device "nvmf_init_br2" 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:42.110 Cannot find device "nvmf_tgt_br" 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:42.110 Cannot find device "nvmf_tgt_br2" 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:42.110 Cannot find device "nvmf_init_br" 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:42.110 Cannot find device "nvmf_init_br2" 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:42.110 Cannot find device "nvmf_tgt_br" 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:42.110 Cannot find device "nvmf_tgt_br2" 00:11:42.110 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:42.382 Cannot find device "nvmf_br" 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:42.382 Cannot find device "nvmf_init_if" 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:42.382 Cannot find device "nvmf_init_if2" 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:42.382 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:42.382 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:42.382 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:42.383 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:42.383 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:42.383 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:42.383 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:42.383 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:42.383 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:42.383 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:42.383 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:42.643 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:42.643 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.174 ms 00:11:42.643 00:11:42.643 --- 10.0.0.3 ping statistics --- 00:11:42.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.643 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:42.643 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:42.643 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:11:42.643 00:11:42.643 --- 10.0.0.4 ping statistics --- 00:11:42.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.643 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:42.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:42.643 00:11:42.643 --- 10.0.0.1 ping statistics --- 00:11:42.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.643 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:42.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:42.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:11:42.643 00:11:42.643 --- 10.0.0.2 ping statistics --- 00:11:42.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.643 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.643 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=66342 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 66342 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 66342 ']' 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:42.644 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.644 [2024-10-01 13:45:52.706194] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:11:42.644 [2024-10-01 13:45:52.706300] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.903 [2024-10-01 13:45:52.849132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.903 [2024-10-01 13:45:52.979700] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.903 [2024-10-01 13:45:52.979770] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.903 [2024-10-01 13:45:52.979785] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.903 [2024-10-01 13:45:52.979795] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.903 [2024-10-01 13:45:52.979804] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.903 [2024-10-01 13:45:52.979947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.903 [2024-10-01 13:45:52.980710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.903 [2024-10-01 13:45:52.980743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.903 [2024-10-01 13:45:52.980041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.903 [2024-10-01 13:45:53.037697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.838 [2024-10-01 13:45:53.810823] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.838 Malloc0 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:43.838 13:45:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.838 [2024-10-01 13:45:53.881380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.838 test case1: single bdev can't be used in multiple subsystems 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:43.838 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:43.839 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.839 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.839 [2024-10-01 13:45:53.905045] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:43.839 [2024-10-01 13:45:53.905159] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:43.839 [2024-10-01 13:45:53.905179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.839 request: 00:11:43.839 { 00:11:43.839 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:43.839 "namespace": { 00:11:43.839 "bdev_name": "Malloc0", 00:11:43.839 "no_auto_visible": false 00:11:43.839 }, 00:11:43.839 "method": "nvmf_subsystem_add_ns", 00:11:43.839 "req_id": 1 00:11:43.839 } 00:11:43.839 Got JSON-RPC error response 00:11:43.839 response: 00:11:43.839 { 00:11:43.839 "code": -32602, 00:11:43.839 "message": "Invalid parameters" 00:11:43.839 } 00:11:43.839 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:43.839 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:43.839 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:43.839 Adding namespace failed - expected result. 00:11:43.839 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:43.839 test case2: host connect to nvmf target in multiple paths 00:11:43.839 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:43.839 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:11:43.839 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.839 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.839 [2024-10-01 13:45:53.917370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:11:43.839 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.839 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid=88f52f68-80e5-4327-8a21-70d63145da24 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:44.097 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid=88f52f68-80e5-4327-8a21-70d63145da24 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:11:44.097 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.097 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:44.097 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.097 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:44.097 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:46.628 13:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:46.628 13:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:46.628 13:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.628 13:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:46.628 13:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.628 13:45:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:46.628 13:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:46.628 [global] 00:11:46.628 thread=1 00:11:46.628 invalidate=1 00:11:46.628 rw=write 00:11:46.628 time_based=1 00:11:46.628 runtime=1 00:11:46.628 ioengine=libaio 00:11:46.628 direct=1 00:11:46.628 bs=4096 00:11:46.628 iodepth=1 00:11:46.628 norandommap=0 00:11:46.628 numjobs=1 00:11:46.628 00:11:46.628 verify_dump=1 00:11:46.628 verify_backlog=512 00:11:46.628 verify_state_save=0 00:11:46.628 do_verify=1 00:11:46.628 verify=crc32c-intel 00:11:46.628 [job0] 00:11:46.628 filename=/dev/nvme0n1 00:11:46.628 Could not set queue depth (nvme0n1) 00:11:46.628 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.628 fio-3.35 00:11:46.628 Starting 1 thread 00:11:47.565 00:11:47.565 job0: (groupid=0, jobs=1): err= 0: pid=66428: Tue Oct 1 13:45:57 2024 00:11:47.565 read: IOPS=2318, BW=9275KiB/s (9497kB/s)(9284KiB/1001msec) 00:11:47.565 slat (nsec): min=11704, max=38147, avg=13825.91, stdev=2729.88 00:11:47.565 clat (usec): min=155, max=327, avg=232.19, stdev=27.89 00:11:47.565 lat (usec): min=168, max=340, avg=246.02, stdev=27.69 00:11:47.565 clat percentiles (usec): 00:11:47.565 | 1.00th=[ 169], 5.00th=[ 188], 10.00th=[ 198], 20.00th=[ 208], 00:11:47.565 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239], 00:11:47.565 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 281], 00:11:47.565 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 322], 99.95th=[ 322], 00:11:47.565 | 99.99th=[ 326] 00:11:47.565 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:47.565 slat (nsec): min=16482, max=69809, avg=20540.64, stdev=4175.71 00:11:47.565 clat (usec): min=94, max=2911, avg=143.44, stdev=88.03 00:11:47.565 lat (usec): min=112, max=2937, avg=163.98, stdev=88.49 00:11:47.565 clat percentiles (usec): 00:11:47.565 | 1.00th=[ 100], 5.00th=[ 106], 10.00th=[ 113], 20.00th=[ 122], 00:11:47.565 | 30.00th=[ 128], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 145], 00:11:47.565 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 167], 95.00th=[ 176], 00:11:47.565 | 99.00th=[ 208], 99.50th=[ 277], 99.90th=[ 1696], 99.95th=[ 2474], 00:11:47.565 | 99.99th=[ 2900] 00:11:47.565 bw ( KiB/s): min=12048, max=12048, per=100.00%, avg=12048.00, stdev= 0.00, samples=1 00:11:47.565 iops : min= 3012, max= 3012, avg=3012.00, stdev= 0.00, samples=1 00:11:47.565 lat (usec) : 100=0.57%, 250=87.13%, 500=12.17%, 750=0.02%, 1000=0.02% 00:11:47.565 lat (msec) : 2=0.04%, 4=0.04% 00:11:47.565 cpu : usr=2.80%, sys=5.80%, ctx=4881, majf=0, minf=5 00:11:47.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.565 issued rwts: total=2321,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.565 00:11:47.566 Run status group 0 (all jobs): 00:11:47.566 READ: bw=9275KiB/s (9497kB/s), 9275KiB/s-9275KiB/s (9497kB/s-9497kB/s), io=9284KiB (9507kB), run=1001-1001msec 00:11:47.566 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:11:47.566 00:11:47.566 Disk stats (read/write): 
00:11:47.566 nvme0n1: ios=2098/2344, merge=0/0, ticks=506/364, in_queue=870, util=91.28% 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.566 rmmod nvme_tcp 00:11:47.566 rmmod nvme_fabrics 00:11:47.566 rmmod nvme_keyring 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 66342 ']' 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 66342 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 66342 ']' 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 66342 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66342 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:47.566 killing process with pid 66342 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66342' 00:11:47.566 13:45:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 66342 00:11:47.566 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 66342 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:48.134 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:11:48.392 00:11:48.392 real 0m6.395s 00:11:48.392 user 0m19.496s 00:11:48.392 sys 0m2.158s 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:11:48.392 ************************************ 00:11:48.392 END TEST nvmf_nmic 00:11:48.392 ************************************ 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:48.392 ************************************ 00:11:48.392 START TEST nvmf_fio_target 00:11:48.392 ************************************ 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:48.392 * Looking for test storage... 00:11:48.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:11:48.392 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:48.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.652 --rc genhtml_branch_coverage=1 00:11:48.652 --rc genhtml_function_coverage=1 00:11:48.652 --rc genhtml_legend=1 00:11:48.652 --rc geninfo_all_blocks=1 00:11:48.652 --rc geninfo_unexecuted_blocks=1 00:11:48.652 00:11:48.652 ' 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:48.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.652 --rc genhtml_branch_coverage=1 00:11:48.652 --rc genhtml_function_coverage=1 00:11:48.652 --rc genhtml_legend=1 00:11:48.652 --rc geninfo_all_blocks=1 00:11:48.652 --rc geninfo_unexecuted_blocks=1 00:11:48.652 00:11:48.652 ' 00:11:48.652 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:48.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.652 --rc genhtml_branch_coverage=1 00:11:48.652 --rc genhtml_function_coverage=1 00:11:48.652 --rc genhtml_legend=1 00:11:48.652 --rc geninfo_all_blocks=1 00:11:48.653 --rc geninfo_unexecuted_blocks=1 00:11:48.653 00:11:48.653 ' 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:48.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.653 --rc genhtml_branch_coverage=1 00:11:48.653 --rc genhtml_function_coverage=1 00:11:48.653 --rc genhtml_legend=1 00:11:48.653 --rc geninfo_all_blocks=1 00:11:48.653 --rc geninfo_unexecuted_blocks=1 00:11:48.653 00:11:48.653 ' 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:48.653 
13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.653 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:48.653 13:45:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:48.653 Cannot find device "nvmf_init_br" 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:48.653 Cannot find device "nvmf_init_br2" 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:48.653 Cannot find device "nvmf_tgt_br" 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:48.653 Cannot find device "nvmf_tgt_br2" 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:48.653 Cannot find device "nvmf_init_br" 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:48.653 Cannot find device "nvmf_init_br2" 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:48.653 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:48.653 Cannot find device "nvmf_tgt_br" 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:48.654 Cannot find device "nvmf_tgt_br2" 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:48.654 Cannot find device "nvmf_br" 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:48.654 Cannot find device "nvmf_init_if" 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:48.654 Cannot find device "nvmf_init_if2" 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:48.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:11:48.654 
13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:48.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:48.654 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:48.913 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:48.914 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:48.914 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:48.914 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:48.914 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:48.914 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:48.914 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:48.914 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:11:48.914 00:11:48.914 --- 10.0.0.3 ping statistics --- 00:11:48.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.914 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:48.914 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:48.914 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:11:48.914 00:11:48.914 --- 10.0.0.4 ping statistics --- 00:11:48.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.914 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:48.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:48.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:48.914 00:11:48.914 --- 10.0.0.1 ping statistics --- 00:11:48.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.914 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:48.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:48.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:11:48.914 00:11:48.914 --- 10.0.0.2 ping statistics --- 00:11:48.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.914 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=66666 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 66666 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 66666 ']' 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:48.914 13:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.172 [2024-10-01 13:45:59.133716] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
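Condensed from the setup trace above: the test network is built from veth pairs whose initiator-side ends (nvmf_init_if*) stay on the host, whose target-side ends (nvmf_tgt_if*) are moved into the nvmf_tgt_ns_spdk namespace, and whose *_br peer ends are all enslaved to the nvmf_br bridge; iptables then accepts TCP port 4420 on the initiator interfaces, connectivity is verified with ping, and the target is launched inside the namespace. A rough sketch using the names and addresses seen in the log (the second interface pair and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                   # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host
    # start the target inside the namespace, as nvmfappstart does in this run
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &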
00:11:49.172 [2024-10-01 13:45:59.133807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.172 [2024-10-01 13:45:59.273373] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.430 [2024-10-01 13:45:59.454602] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.430 [2024-10-01 13:45:59.454679] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.430 [2024-10-01 13:45:59.454694] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.430 [2024-10-01 13:45:59.454706] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.430 [2024-10-01 13:45:59.454715] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.430 [2024-10-01 13:45:59.454874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.430 [2024-10-01 13:45:59.455032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.430 [2024-10-01 13:45:59.455652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.430 [2024-10-01 13:45:59.455691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.430 [2024-10-01 13:45:59.536023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:50.364 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:50.364 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:50.364 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:50.364 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:50.364 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.364 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.364 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:50.623 [2024-10-01 13:46:00.581132] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.623 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:50.881 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:50.881 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:51.140 13:46:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:51.140 13:46:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:51.397 13:46:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:51.397 13:46:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:51.964 13:46:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:51.964 13:46:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:51.964 13:46:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:52.530 13:46:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:52.530 13:46:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:52.788 13:46:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:52.788 13:46:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:53.108 13:46:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:53.108 13:46:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:53.365 13:46:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.623 13:46:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:53.623 13:46:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:53.881 13:46:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:53.881 13:46:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:54.139 13:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:54.398 [2024-10-01 13:46:04.541420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:54.398 13:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:54.964 13:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:55.222 13:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid=88f52f68-80e5-4327-8a21-70d63145da24 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:55.222 13:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:55.222 13:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:55.222 13:46:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.222 13:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:55.222 13:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:55.222 13:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:57.749 13:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:57.749 13:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:57.749 13:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:57.749 13:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:57.749 13:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.749 13:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:57.749 13:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:57.749 [global] 00:11:57.749 thread=1 00:11:57.749 invalidate=1 00:11:57.749 rw=write 00:11:57.749 time_based=1 00:11:57.749 runtime=1 00:11:57.749 ioengine=libaio 00:11:57.749 direct=1 00:11:57.749 bs=4096 00:11:57.749 iodepth=1 00:11:57.749 norandommap=0 00:11:57.749 numjobs=1 00:11:57.749 00:11:57.749 verify_dump=1 00:11:57.749 verify_backlog=512 00:11:57.749 verify_state_save=0 00:11:57.749 do_verify=1 00:11:57.749 verify=crc32c-intel 00:11:57.749 [job0] 00:11:57.749 filename=/dev/nvme0n1 00:11:57.749 [job1] 00:11:57.749 filename=/dev/nvme0n2 00:11:57.749 [job2] 00:11:57.749 filename=/dev/nvme0n3 00:11:57.749 [job3] 00:11:57.749 filename=/dev/nvme0n4 00:11:57.749 Could not set queue depth (nvme0n1) 00:11:57.749 Could not set queue depth (nvme0n2) 00:11:57.749 Could not set queue depth (nvme0n3) 00:11:57.749 Could not set queue depth (nvme0n4) 00:11:57.749 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:57.749 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:57.750 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:57.750 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:57.750 fio-3.35 00:11:57.750 Starting 4 threads 00:11:58.682 00:11:58.682 job0: (groupid=0, jobs=1): err= 0: pid=66857: Tue Oct 1 13:46:08 2024 00:11:58.682 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:58.682 slat (nsec): min=8148, max=51538, avg=11500.23, stdev=3809.07 00:11:58.682 clat (usec): min=162, max=344, avg=234.32, stdev=18.36 00:11:58.682 lat (usec): min=174, max=356, avg=245.82, stdev=19.34 00:11:58.682 clat percentiles (usec): 00:11:58.682 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:11:58.682 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:11:58.682 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 262], 00:11:58.682 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 343], 99.95th=[ 343], 00:11:58.682 | 99.99th=[ 347] 
00:11:58.682 write: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(9.85MiB/1001msec); 0 zone resets 00:11:58.682 slat (nsec): min=13129, max=90390, avg=19506.94, stdev=4166.02 00:11:58.682 clat (usec): min=87, max=753, avg=174.45, stdev=26.82 00:11:58.682 lat (usec): min=106, max=843, avg=193.96, stdev=27.98 00:11:58.682 clat percentiles (usec): 00:11:58.682 | 1.00th=[ 119], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 159], 00:11:58.682 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:11:58.682 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 208], 00:11:58.682 | 99.00th=[ 233], 99.50th=[ 255], 99.90th=[ 545], 99.95th=[ 717], 00:11:58.682 | 99.99th=[ 750] 00:11:58.682 bw ( KiB/s): min= 9747, max= 9747, per=26.65%, avg=9747.00, stdev= 0.00, samples=1 00:11:58.682 iops : min= 2436, max= 2436, avg=2436.00, stdev= 0.00, samples=1 00:11:58.682 lat (usec) : 100=0.07%, 250=93.63%, 500=6.24%, 750=0.04%, 1000=0.02% 00:11:58.682 cpu : usr=1.90%, sys=6.10%, ctx=4571, majf=0, minf=7 00:11:58.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:58.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.682 issued rwts: total=2048,2522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.682 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:58.682 job1: (groupid=0, jobs=1): err= 0: pid=66858: Tue Oct 1 13:46:08 2024 00:11:58.682 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:58.682 slat (nsec): min=9913, max=33836, avg=12749.66, stdev=1984.55 00:11:58.682 clat (usec): min=192, max=347, avg=233.03, stdev=19.32 00:11:58.682 lat (usec): min=206, max=361, avg=245.78, stdev=19.49 00:11:58.682 clat percentiles (usec): 00:11:58.682 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:11:58.682 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:11:58.682 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 262], 00:11:58.682 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 334], 99.95th=[ 343], 00:11:58.682 | 99.99th=[ 347] 00:11:58.682 write: IOPS=2530, BW=9.88MiB/s (10.4MB/s)(9.89MiB/1001msec); 0 zone resets 00:11:58.682 slat (usec): min=10, max=693, avg=16.80, stdev=14.44 00:11:58.682 clat (usec): min=98, max=373, avg=176.56, stdev=22.22 00:11:58.682 lat (usec): min=122, max=870, avg=193.36, stdev=25.96 00:11:58.682 clat percentiles (usec): 00:11:58.682 | 1.00th=[ 113], 5.00th=[ 137], 10.00th=[ 155], 20.00th=[ 163], 00:11:58.682 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:11:58.682 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 210], 00:11:58.682 | 99.00th=[ 237], 99.50th=[ 253], 99.90th=[ 289], 99.95th=[ 293], 00:11:58.682 | 99.99th=[ 375] 00:11:58.682 bw ( KiB/s): min= 9872, max= 9872, per=27.00%, avg=9872.00, stdev= 0.00, samples=1 00:11:58.682 iops : min= 2468, max= 2468, avg=2468.00, stdev= 0.00, samples=1 00:11:58.682 lat (usec) : 100=0.02%, 250=93.84%, 500=6.13% 00:11:58.682 cpu : usr=1.10%, sys=6.40%, ctx=4584, majf=0, minf=9 00:11:58.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:58.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.682 issued rwts: total=2048,2533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.682 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:58.682 job2: 
(groupid=0, jobs=1): err= 0: pid=66859: Tue Oct 1 13:46:08 2024 00:11:58.682 read: IOPS=1680, BW=6721KiB/s (6883kB/s)(6728KiB/1001msec) 00:11:58.682 slat (nsec): min=12717, max=51809, avg=17341.33, stdev=4386.92 00:11:58.682 clat (usec): min=157, max=654, avg=293.25, stdev=57.80 00:11:58.682 lat (usec): min=171, max=677, avg=310.59, stdev=59.63 00:11:58.682 clat percentiles (usec): 00:11:58.682 | 1.00th=[ 174], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 265], 00:11:58.682 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:11:58.682 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 420], 00:11:58.682 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 570], 99.95th=[ 652], 00:11:58.682 | 99.99th=[ 652] 00:11:58.682 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:58.682 slat (nsec): min=17685, max=86150, avg=24206.16, stdev=7103.51 00:11:58.682 clat (usec): min=105, max=2171, avg=205.20, stdev=57.23 00:11:58.682 lat (usec): min=125, max=2213, avg=229.40, stdev=58.04 00:11:58.682 clat percentiles (usec): 00:11:58.682 | 1.00th=[ 117], 5.00th=[ 137], 10.00th=[ 176], 20.00th=[ 190], 00:11:58.682 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 208], 00:11:58.682 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 245], 00:11:58.682 | 99.00th=[ 318], 99.50th=[ 388], 99.90th=[ 619], 99.95th=[ 938], 00:11:58.682 | 99.99th=[ 2180] 00:11:58.682 bw ( KiB/s): min= 8192, max= 8192, per=22.40%, avg=8192.00, stdev= 0.00, samples=1 00:11:58.682 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:58.682 lat (usec) : 250=55.25%, 500=43.40%, 750=1.29%, 1000=0.03% 00:11:58.682 lat (msec) : 4=0.03% 00:11:58.682 cpu : usr=2.00%, sys=5.90%, ctx=3730, majf=0, minf=7 00:11:58.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:58.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.682 issued rwts: total=1682,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.682 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:58.682 job3: (groupid=0, jobs=1): err= 0: pid=66860: Tue Oct 1 13:46:08 2024 00:11:58.682 read: IOPS=1645, BW=6581KiB/s (6739kB/s)(6588KiB/1001msec) 00:11:58.682 slat (nsec): min=11740, max=56868, avg=15285.87, stdev=3785.03 00:11:58.682 clat (usec): min=196, max=5077, avg=298.50, stdev=153.35 00:11:58.682 lat (usec): min=216, max=5099, avg=313.78, stdev=153.83 00:11:58.683 clat percentiles (usec): 00:11:58.683 | 1.00th=[ 247], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:11:58.683 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:11:58.683 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 392], 00:11:58.683 | 99.00th=[ 490], 99.50th=[ 515], 99.90th=[ 3752], 99.95th=[ 5080], 00:11:58.683 | 99.99th=[ 5080] 00:11:58.683 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:58.683 slat (usec): min=16, max=124, avg=22.14, stdev= 6.32 00:11:58.683 clat (usec): min=109, max=2368, avg=210.44, stdev=60.90 00:11:58.683 lat (usec): min=130, max=2395, avg=232.57, stdev=62.60 00:11:58.683 clat percentiles (usec): 00:11:58.683 | 1.00th=[ 125], 5.00th=[ 143], 10.00th=[ 184], 20.00th=[ 194], 00:11:58.683 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:11:58.683 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 249], 00:11:58.683 | 99.00th=[ 355], 99.50th=[ 388], 99.90th=[ 486], 
99.95th=[ 586], 00:11:58.683 | 99.99th=[ 2376] 00:11:58.683 bw ( KiB/s): min= 8192, max= 8192, per=22.40%, avg=8192.00, stdev= 0.00, samples=1 00:11:58.683 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:58.683 lat (usec) : 250=53.67%, 500=45.90%, 750=0.27%, 1000=0.08% 00:11:58.683 lat (msec) : 4=0.05%, 10=0.03% 00:11:58.683 cpu : usr=0.90%, sys=6.20%, ctx=3703, majf=0, minf=14 00:11:58.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:58.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.683 issued rwts: total=1647,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:58.683 00:11:58.683 Run status group 0 (all jobs): 00:11:58.683 READ: bw=29.0MiB/s (30.4MB/s), 6581KiB/s-8184KiB/s (6739kB/s-8380kB/s), io=29.0MiB (30.4MB), run=1001-1001msec 00:11:58.683 WRITE: bw=35.7MiB/s (37.4MB/s), 8184KiB/s-9.88MiB/s (8380kB/s-10.4MB/s), io=35.7MiB (37.5MB), run=1001-1001msec 00:11:58.683 00:11:58.683 Disk stats (read/write): 00:11:58.683 nvme0n1: ios=1897/2048, merge=0/0, ticks=434/369, in_queue=803, util=87.68% 00:11:58.683 nvme0n2: ios=1898/2048, merge=0/0, ticks=458/338, in_queue=796, util=87.96% 00:11:58.683 nvme0n3: ios=1536/1659, merge=0/0, ticks=442/360, in_queue=802, util=88.82% 00:11:58.683 nvme0n4: ios=1536/1568, merge=0/0, ticks=464/343, in_queue=807, util=89.12% 00:11:58.683 13:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:58.683 [global] 00:11:58.683 thread=1 00:11:58.683 invalidate=1 00:11:58.683 rw=randwrite 00:11:58.683 time_based=1 00:11:58.683 runtime=1 00:11:58.683 ioengine=libaio 00:11:58.683 direct=1 00:11:58.683 bs=4096 00:11:58.683 iodepth=1 00:11:58.683 norandommap=0 00:11:58.683 numjobs=1 00:11:58.683 00:11:58.683 verify_dump=1 00:11:58.683 verify_backlog=512 00:11:58.683 verify_state_save=0 00:11:58.683 do_verify=1 00:11:58.683 verify=crc32c-intel 00:11:58.683 [job0] 00:11:58.683 filename=/dev/nvme0n1 00:11:58.683 [job1] 00:11:58.683 filename=/dev/nvme0n2 00:11:58.683 [job2] 00:11:58.683 filename=/dev/nvme0n3 00:11:58.683 [job3] 00:11:58.683 filename=/dev/nvme0n4 00:11:58.683 Could not set queue depth (nvme0n1) 00:11:58.683 Could not set queue depth (nvme0n2) 00:11:58.683 Could not set queue depth (nvme0n3) 00:11:58.683 Could not set queue depth (nvme0n4) 00:11:58.939 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:58.940 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:58.940 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:58.940 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:58.940 fio-3.35 00:11:58.940 Starting 4 threads 00:12:00.310 00:12:00.310 job0: (groupid=0, jobs=1): err= 0: pid=66924: Tue Oct 1 13:46:10 2024 00:12:00.310 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:00.310 slat (nsec): min=8274, max=76679, avg=13234.80, stdev=5101.48 00:12:00.310 clat (usec): min=219, max=711, avg=351.35, stdev=68.42 00:12:00.310 lat (usec): min=228, max=722, avg=364.59, stdev=69.13 00:12:00.310 clat percentiles (usec): 00:12:00.310 | 1.00th=[ 
231], 5.00th=[ 243], 10.00th=[ 255], 20.00th=[ 310], 00:12:00.310 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 363], 00:12:00.310 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 416], 95.00th=[ 469], 00:12:00.310 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 668], 99.95th=[ 709], 00:12:00.310 | 99.99th=[ 709] 00:12:00.310 write: IOPS=1665, BW=6661KiB/s (6821kB/s)(6668KiB/1001msec); 0 zone resets 00:12:00.310 slat (nsec): min=11110, max=91369, avg=21543.33, stdev=6522.41 00:12:00.310 clat (usec): min=128, max=476, avg=238.74, stdev=39.79 00:12:00.310 lat (usec): min=163, max=506, avg=260.29, stdev=41.21 00:12:00.310 clat percentiles (usec): 00:12:00.310 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 200], 00:12:00.310 | 30.00th=[ 210], 40.00th=[ 223], 50.00th=[ 239], 60.00th=[ 253], 00:12:00.310 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:12:00.310 | 99.00th=[ 363], 99.50th=[ 404], 99.90th=[ 461], 99.95th=[ 478], 00:12:00.310 | 99.99th=[ 478] 00:12:00.310 bw ( KiB/s): min= 8192, max= 8192, per=25.85%, avg=8192.00, stdev= 0.00, samples=1 00:12:00.310 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:00.310 lat (usec) : 250=33.62%, 500=64.63%, 750=1.75% 00:12:00.310 cpu : usr=1.60%, sys=4.70%, ctx=3204, majf=0, minf=7 00:12:00.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:00.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.310 issued rwts: total=1536,1667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:00.310 job1: (groupid=0, jobs=1): err= 0: pid=66925: Tue Oct 1 13:46:10 2024 00:12:00.310 read: IOPS=3061, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1001msec) 00:12:00.310 slat (nsec): min=10700, max=56841, avg=14421.36, stdev=5303.57 00:12:00.310 clat (usec): min=132, max=397, avg=168.98, stdev=41.36 00:12:00.310 lat (usec): min=144, max=410, avg=183.40, stdev=41.91 00:12:00.310 clat percentiles (usec): 00:12:00.310 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:12:00.310 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:12:00.310 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 192], 95.00th=[ 306], 00:12:00.310 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 359], 99.95th=[ 363], 00:12:00.310 | 99.99th=[ 396] 00:12:00.310 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:00.310 slat (usec): min=13, max=117, avg=22.53, stdev= 8.93 00:12:00.310 clat (usec): min=83, max=2104, avg=115.81, stdev=39.49 00:12:00.310 lat (usec): min=103, max=2122, avg=138.34, stdev=41.16 00:12:00.310 clat percentiles (usec): 00:12:00.310 | 1.00th=[ 92], 5.00th=[ 95], 10.00th=[ 98], 20.00th=[ 103], 00:12:00.310 | 30.00th=[ 106], 40.00th=[ 110], 50.00th=[ 114], 60.00th=[ 118], 00:12:00.310 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 135], 95.00th=[ 139], 00:12:00.310 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 219], 99.95th=[ 586], 00:12:00.310 | 99.99th=[ 2114] 00:12:00.310 bw ( KiB/s): min=12288, max=12288, per=38.78%, avg=12288.00, stdev= 0.00, samples=1 00:12:00.310 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:00.310 lat (usec) : 100=6.78%, 250=90.24%, 500=2.95%, 750=0.02% 00:12:00.310 lat (msec) : 4=0.02% 00:12:00.310 cpu : usr=2.30%, sys=9.60%, ctx=6140, majf=0, minf=15 00:12:00.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:12:00.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.310 issued rwts: total=3065,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:00.310 job2: (groupid=0, jobs=1): err= 0: pid=66926: Tue Oct 1 13:46:10 2024 00:12:00.310 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:00.310 slat (nsec): min=8457, max=86246, avg=16945.78, stdev=5773.77 00:12:00.310 clat (usec): min=215, max=688, avg=347.17, stdev=67.21 00:12:00.310 lat (usec): min=228, max=702, avg=364.11, stdev=68.95 00:12:00.310 clat percentiles (usec): 00:12:00.310 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 251], 20.00th=[ 310], 00:12:00.310 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 355], 00:12:00.310 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 412], 95.00th=[ 465], 00:12:00.310 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 635], 99.95th=[ 693], 00:12:00.310 | 99.99th=[ 693] 00:12:00.310 write: IOPS=1667, BW=6669KiB/s (6829kB/s)(6676KiB/1001msec); 0 zone resets 00:12:00.310 slat (nsec): min=13080, max=79913, avg=19898.66, stdev=5404.05 00:12:00.310 clat (usec): min=103, max=489, avg=240.26, stdev=41.66 00:12:00.310 lat (usec): min=129, max=506, avg=260.16, stdev=41.69 00:12:00.310 clat percentiles (usec): 00:12:00.310 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 200], 00:12:00.310 | 30.00th=[ 212], 40.00th=[ 223], 50.00th=[ 241], 60.00th=[ 255], 00:12:00.310 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:12:00.310 | 99.00th=[ 351], 99.50th=[ 416], 99.90th=[ 486], 99.95th=[ 490], 00:12:00.310 | 99.99th=[ 490] 00:12:00.310 bw ( KiB/s): min= 8192, max= 8192, per=25.85%, avg=8192.00, stdev= 0.00, samples=1 00:12:00.310 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:00.310 lat (usec) : 250=33.57%, 500=64.74%, 750=1.68% 00:12:00.310 cpu : usr=1.70%, sys=5.10%, ctx=3205, majf=0, minf=9 00:12:00.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:00.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.310 issued rwts: total=1536,1669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:00.310 job3: (groupid=0, jobs=1): err= 0: pid=66927: Tue Oct 1 13:46:10 2024 00:12:00.310 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:00.310 slat (nsec): min=13780, max=90990, avg=28007.69, stdev=9132.12 00:12:00.310 clat (usec): min=281, max=3308, avg=455.42, stdev=171.20 00:12:00.310 lat (usec): min=330, max=3325, avg=483.43, stdev=174.65 00:12:00.310 clat percentiles (usec): 00:12:00.310 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 351], 00:12:00.311 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 379], 60.00th=[ 437], 00:12:00.311 | 70.00th=[ 510], 80.00th=[ 545], 90.00th=[ 676], 95.00th=[ 701], 00:12:00.311 | 99.00th=[ 750], 99.50th=[ 898], 99.90th=[ 2671], 99.95th=[ 3294], 00:12:00.311 | 99.99th=[ 3294] 00:12:00.311 write: IOPS=1520, BW=6082KiB/s (6228kB/s)(6088KiB/1001msec); 0 zone resets 00:12:00.311 slat (usec): min=13, max=2075, avg=36.38, stdev=54.75 00:12:00.311 clat (usec): min=4, max=6374, avg=289.82, stdev=211.31 00:12:00.311 lat (usec): min=133, max=6408, avg=326.20, stdev=222.32 00:12:00.311 clat percentiles (usec): 
00:12:00.311 | 1.00th=[ 125], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 155], 00:12:00.311 | 30.00th=[ 184], 40.00th=[ 249], 50.00th=[ 285], 60.00th=[ 306], 00:12:00.311 | 70.00th=[ 334], 80.00th=[ 400], 90.00th=[ 465], 95.00th=[ 486], 00:12:00.311 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 3458], 99.95th=[ 6390], 00:12:00.311 | 99.99th=[ 6390] 00:12:00.311 bw ( KiB/s): min= 5296, max= 5296, per=16.71%, avg=5296.00, stdev= 0.00, samples=1 00:12:00.311 iops : min= 1324, max= 1324, avg=1324.00, stdev= 0.00, samples=1 00:12:00.311 lat (usec) : 10=0.04%, 250=23.92%, 500=60.45%, 750=15.12%, 1000=0.24% 00:12:00.311 lat (msec) : 2=0.08%, 4=0.12%, 10=0.04% 00:12:00.311 cpu : usr=1.90%, sys=6.80%, ctx=2564, majf=0, minf=17 00:12:00.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:00.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.311 issued rwts: total=1024,1522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:00.311 00:12:00.311 Run status group 0 (all jobs): 00:12:00.311 READ: bw=27.9MiB/s (29.3MB/s), 4092KiB/s-12.0MiB/s (4190kB/s-12.5MB/s), io=28.0MiB (29.3MB), run=1001-1001msec 00:12:00.311 WRITE: bw=30.9MiB/s (32.4MB/s), 6082KiB/s-12.0MiB/s (6228kB/s-12.6MB/s), io=31.0MiB (32.5MB), run=1001-1001msec 00:12:00.311 00:12:00.311 Disk stats (read/write): 00:12:00.311 nvme0n1: ios=1213/1536, merge=0/0, ticks=411/369, in_queue=780, util=87.07% 00:12:00.311 nvme0n2: ios=2588/2892, merge=0/0, ticks=449/351, in_queue=800, util=88.21% 00:12:00.311 nvme0n3: ios=1163/1536, merge=0/0, ticks=417/354, in_queue=771, util=89.17% 00:12:00.311 nvme0n4: ios=927/1024, merge=0/0, ticks=432/350, in_queue=782, util=89.21% 00:12:00.311 13:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:00.311 [global] 00:12:00.311 thread=1 00:12:00.311 invalidate=1 00:12:00.311 rw=write 00:12:00.311 time_based=1 00:12:00.311 runtime=1 00:12:00.311 ioengine=libaio 00:12:00.311 direct=1 00:12:00.311 bs=4096 00:12:00.311 iodepth=128 00:12:00.311 norandommap=0 00:12:00.311 numjobs=1 00:12:00.311 00:12:00.311 verify_dump=1 00:12:00.311 verify_backlog=512 00:12:00.311 verify_state_save=0 00:12:00.311 do_verify=1 00:12:00.311 verify=crc32c-intel 00:12:00.311 [job0] 00:12:00.311 filename=/dev/nvme0n1 00:12:00.311 [job1] 00:12:00.311 filename=/dev/nvme0n2 00:12:00.311 [job2] 00:12:00.311 filename=/dev/nvme0n3 00:12:00.311 [job3] 00:12:00.311 filename=/dev/nvme0n4 00:12:00.311 Could not set queue depth (nvme0n1) 00:12:00.311 Could not set queue depth (nvme0n2) 00:12:00.311 Could not set queue depth (nvme0n3) 00:12:00.311 Could not set queue depth (nvme0n4) 00:12:00.311 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:00.311 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:00.311 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:00.311 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:00.311 fio-3.35 00:12:00.311 Starting 4 threads 00:12:01.687 00:12:01.687 job0: (groupid=0, jobs=1): err= 0: pid=66983: Tue Oct 1 13:46:11 2024 00:12:01.687 read: IOPS=5653, 
BW=22.1MiB/s (23.2MB/s)(22.1MiB/1002msec) 00:12:01.687 slat (usec): min=7, max=2562, avg=81.78, stdev=377.74 00:12:01.687 clat (usec): min=378, max=12071, avg=10932.15, stdev=787.69 00:12:01.687 lat (usec): min=2729, max=12111, avg=11013.93, stdev=692.05 00:12:01.687 clat percentiles (usec): 00:12:01.687 | 1.00th=[ 8586], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:12:01.687 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11076], 00:12:01.687 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11469], 95.00th=[11600], 00:12:01.687 | 99.00th=[11731], 99.50th=[11863], 99.90th=[11863], 99.95th=[11994], 00:12:01.687 | 99.99th=[12125] 00:12:01.687 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:12:01.687 slat (usec): min=10, max=2446, avg=79.69, stdev=329.42 00:12:01.687 clat (usec): min=5134, max=11519, avg=10521.47, stdev=611.94 00:12:01.687 lat (usec): min=5154, max=11877, avg=10601.16, stdev=517.91 00:12:01.687 clat percentiles (usec): 00:12:01.687 | 1.00th=[ 8225], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10290], 00:12:01.687 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10552], 60.00th=[10683], 00:12:01.687 | 70.00th=[10814], 80.00th=[10814], 90.00th=[11076], 95.00th=[11076], 00:12:01.687 | 99.00th=[11338], 99.50th=[11469], 99.90th=[11469], 99.95th=[11469], 00:12:01.687 | 99.99th=[11469] 00:12:01.687 bw ( KiB/s): min=23816, max=24625, per=36.46%, avg=24220.50, stdev=572.05, samples=2 00:12:01.687 iops : min= 5954, max= 6156, avg=6055.00, stdev=142.84, samples=2 00:12:01.687 lat (usec) : 500=0.01% 00:12:01.687 lat (msec) : 4=0.27%, 10=4.89%, 20=94.83% 00:12:01.687 cpu : usr=5.00%, sys=15.88%, ctx=372, majf=0, minf=12 00:12:01.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:01.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:01.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:01.687 issued rwts: total=5665,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:01.687 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:01.687 job1: (groupid=0, jobs=1): err= 0: pid=66984: Tue Oct 1 13:46:11 2024 00:12:01.687 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:12:01.687 slat (usec): min=6, max=10963, avg=197.27, stdev=1031.92 00:12:01.687 clat (usec): min=13009, max=35755, avg=24285.60, stdev=3862.03 00:12:01.687 lat (usec): min=15127, max=35771, avg=24482.86, stdev=3781.84 00:12:01.687 clat percentiles (usec): 00:12:01.687 | 1.00th=[15270], 5.00th=[18220], 10.00th=[19006], 20.00th=[21365], 00:12:01.687 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:12:01.687 | 70.00th=[24773], 80.00th=[25297], 90.00th=[29754], 95.00th=[32375], 00:12:01.687 | 99.00th=[35914], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:12:01.687 | 99.99th=[35914] 00:12:01.687 write: IOPS=2805, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1004msec); 0 zone resets 00:12:01.687 slat (usec): min=14, max=6593, avg=167.46, stdev=818.73 00:12:01.687 clat (usec): min=312, max=34236, avg=22740.61, stdev=4737.63 00:12:01.687 lat (usec): min=5023, max=34276, avg=22908.07, stdev=4667.44 00:12:01.687 clat percentiles (usec): 00:12:01.687 | 1.00th=[ 5735], 5.00th=[16909], 10.00th=[17171], 20.00th=[17695], 00:12:01.687 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23462], 60.00th=[23462], 00:12:01.687 | 70.00th=[23725], 80.00th=[26084], 90.00th=[29492], 95.00th=[30016], 00:12:01.687 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 
00:12:01.687 | 99.99th=[34341] 00:12:01.687 bw ( KiB/s): min= 9984, max=11528, per=16.19%, avg=10756.00, stdev=1091.77, samples=2 00:12:01.687 iops : min= 2496, max= 2882, avg=2689.00, stdev=272.94, samples=2 00:12:01.687 lat (usec) : 500=0.02% 00:12:01.687 lat (msec) : 10=1.19%, 20=16.89%, 50=81.90% 00:12:01.687 cpu : usr=2.49%, sys=8.77%, ctx=170, majf=0, minf=9 00:12:01.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:01.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:01.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:01.687 issued rwts: total=2560,2817,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:01.687 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:01.687 job2: (groupid=0, jobs=1): err= 0: pid=66985: Tue Oct 1 13:46:11 2024 00:12:01.687 read: IOPS=4672, BW=18.3MiB/s (19.1MB/s)(18.3MiB/1002msec) 00:12:01.687 slat (usec): min=7, max=4107, avg=102.11, stdev=403.01 00:12:01.687 clat (usec): min=699, max=17787, avg=13025.04, stdev=1381.49 00:12:01.687 lat (usec): min=3365, max=17796, avg=13127.15, stdev=1412.85 00:12:01.687 clat percentiles (usec): 00:12:01.687 | 1.00th=[ 8586], 5.00th=[10945], 10.00th=[11600], 20.00th=[12649], 00:12:01.687 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13042], 00:12:01.687 | 70.00th=[13173], 80.00th=[13435], 90.00th=[14746], 95.00th=[15139], 00:12:01.687 | 99.00th=[16319], 99.50th=[16581], 99.90th=[17695], 99.95th=[17695], 00:12:01.687 | 99.99th=[17695] 00:12:01.687 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:12:01.687 slat (usec): min=10, max=4104, avg=93.83, stdev=395.07 00:12:01.687 clat (usec): min=9089, max=17071, avg=12802.86, stdev=1025.74 00:12:01.688 lat (usec): min=9117, max=17088, avg=12896.69, stdev=1083.48 00:12:01.688 clat percentiles (usec): 00:12:01.688 | 1.00th=[10290], 5.00th=[11600], 10.00th=[11731], 20.00th=[12125], 00:12:01.688 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:12:01.688 | 70.00th=[13042], 80.00th=[13435], 90.00th=[13698], 95.00th=[15008], 00:12:01.688 | 99.00th=[16319], 99.50th=[16581], 99.90th=[16909], 99.95th=[16909], 00:12:01.688 | 99.99th=[17171] 00:12:01.688 bw ( KiB/s): min=20048, max=20521, per=30.53%, avg=20284.50, stdev=334.46, samples=2 00:12:01.688 iops : min= 5012, max= 5130, avg=5071.00, stdev=83.44, samples=2 00:12:01.688 lat (usec) : 750=0.01% 00:12:01.688 lat (msec) : 4=0.33%, 10=0.78%, 20=98.89% 00:12:01.688 cpu : usr=4.30%, sys=14.89%, ctx=502, majf=0, minf=5 00:12:01.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:01.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:01.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:01.688 issued rwts: total=4682,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:01.688 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:01.688 job3: (groupid=0, jobs=1): err= 0: pid=66986: Tue Oct 1 13:46:11 2024 00:12:01.688 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:12:01.688 slat (usec): min=7, max=9726, avg=198.00, stdev=1018.56 00:12:01.688 clat (usec): min=13544, max=36258, avg=25867.79, stdev=3462.18 00:12:01.688 lat (usec): min=13565, max=36273, avg=26065.79, stdev=3335.49 00:12:01.688 clat percentiles (usec): 00:12:01.688 | 1.00th=[14222], 5.00th=[23200], 10.00th=[23725], 20.00th=[23987], 00:12:01.688 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 
60.00th=[25035], 00:12:01.688 | 70.00th=[26870], 80.00th=[28705], 90.00th=[31327], 95.00th=[32375], 00:12:01.688 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:12:01.688 | 99.99th=[36439] 00:12:01.688 write: IOPS=2585, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1003msec); 0 zone resets 00:12:01.688 slat (usec): min=11, max=9568, avg=181.05, stdev=877.67 00:12:01.688 clat (usec): min=295, max=33878, avg=23138.50, stdev=4128.11 00:12:01.688 lat (usec): min=4602, max=33963, avg=23319.56, stdev=4048.72 00:12:01.688 clat percentiles (usec): 00:12:01.688 | 1.00th=[ 5538], 5.00th=[17695], 10.00th=[17957], 20.00th=[20841], 00:12:01.688 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:12:01.688 | 70.00th=[23725], 80.00th=[23987], 90.00th=[29492], 95.00th=[30802], 00:12:01.688 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:12:01.688 | 99.99th=[33817] 00:12:01.688 bw ( KiB/s): min= 9208, max=11272, per=15.41%, avg=10240.00, stdev=1459.47, samples=2 00:12:01.688 iops : min= 2302, max= 2818, avg=2560.00, stdev=364.87, samples=2 00:12:01.688 lat (usec) : 500=0.02% 00:12:01.688 lat (msec) : 10=0.62%, 20=9.04%, 50=90.32% 00:12:01.688 cpu : usr=3.69%, sys=8.08%, ctx=197, majf=0, minf=13 00:12:01.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:01.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:01.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:01.688 issued rwts: total=2560,2593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:01.688 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:01.688 00:12:01.688 Run status group 0 (all jobs): 00:12:01.688 READ: bw=60.2MiB/s (63.1MB/s), 9.96MiB/s-22.1MiB/s (10.4MB/s-23.2MB/s), io=60.4MiB (63.4MB), run=1002-1004msec 00:12:01.688 WRITE: bw=64.9MiB/s (68.0MB/s), 10.1MiB/s-24.0MiB/s (10.6MB/s-25.1MB/s), io=65.1MiB (68.3MB), run=1002-1004msec 00:12:01.688 00:12:01.688 Disk stats (read/write): 00:12:01.688 nvme0n1: ios=4881/5120, merge=0/0, ticks=11622/11391, in_queue=23013, util=87.66% 00:12:01.688 nvme0n2: ios=2048/2464, merge=0/0, ticks=12392/12641, in_queue=25033, util=86.91% 00:12:01.688 nvme0n3: ios=4096/4223, merge=0/0, ticks=17053/14728, in_queue=31781, util=88.80% 00:12:01.688 nvme0n4: ios=2048/2240, merge=0/0, ticks=12553/12152, in_queue=24705, util=89.24% 00:12:01.688 13:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:01.688 [global] 00:12:01.688 thread=1 00:12:01.688 invalidate=1 00:12:01.688 rw=randwrite 00:12:01.688 time_based=1 00:12:01.688 runtime=1 00:12:01.688 ioengine=libaio 00:12:01.688 direct=1 00:12:01.688 bs=4096 00:12:01.688 iodepth=128 00:12:01.688 norandommap=0 00:12:01.688 numjobs=1 00:12:01.688 00:12:01.688 verify_dump=1 00:12:01.688 verify_backlog=512 00:12:01.688 verify_state_save=0 00:12:01.688 do_verify=1 00:12:01.688 verify=crc32c-intel 00:12:01.688 [job0] 00:12:01.688 filename=/dev/nvme0n1 00:12:01.688 [job1] 00:12:01.688 filename=/dev/nvme0n2 00:12:01.688 [job2] 00:12:01.688 filename=/dev/nvme0n3 00:12:01.688 [job3] 00:12:01.688 filename=/dev/nvme0n4 00:12:01.688 Could not set queue depth (nvme0n1) 00:12:01.688 Could not set queue depth (nvme0n2) 00:12:01.688 Could not set queue depth (nvme0n3) 00:12:01.688 Could not set queue depth (nvme0n4) 00:12:01.688 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:12:01.688 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:01.688 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:01.688 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:01.688 fio-3.35 00:12:01.688 Starting 4 threads 00:12:03.062 00:12:03.062 job0: (groupid=0, jobs=1): err= 0: pid=67039: Tue Oct 1 13:46:12 2024 00:12:03.062 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:12:03.062 slat (usec): min=8, max=9807, avg=143.34, stdev=939.23 00:12:03.062 clat (usec): min=11421, max=34838, avg=20066.89, stdev=2510.34 00:12:03.062 lat (usec): min=11436, max=41768, avg=20210.23, stdev=2560.39 00:12:03.062 clat percentiles (usec): 00:12:03.062 | 1.00th=[12125], 5.00th=[17171], 10.00th=[17957], 20.00th=[18744], 00:12:03.062 | 30.00th=[19268], 40.00th=[20055], 50.00th=[20317], 60.00th=[20841], 00:12:03.062 | 70.00th=[20841], 80.00th=[21365], 90.00th=[21627], 95.00th=[21890], 00:12:03.062 | 99.00th=[30802], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:12:03.062 | 99.99th=[34866] 00:12:03.062 write: IOPS=3448, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1002msec); 0 zone resets 00:12:03.062 slat (usec): min=7, max=15208, avg=152.92, stdev=993.64 00:12:03.062 clat (usec): min=1006, max=28420, avg=18895.00, stdev=3067.80 00:12:03.062 lat (usec): min=8609, max=28625, avg=19047.92, stdev=2949.97 00:12:03.062 clat percentiles (usec): 00:12:03.062 | 1.00th=[ 9503], 5.00th=[13698], 10.00th=[15926], 20.00th=[16909], 00:12:03.062 | 30.00th=[17433], 40.00th=[18220], 50.00th=[18744], 60.00th=[19530], 00:12:03.062 | 70.00th=[20841], 80.00th=[21365], 90.00th=[21890], 95.00th=[22676], 00:12:03.062 | 99.00th=[27919], 99.50th=[27919], 99.90th=[28443], 99.95th=[28443], 00:12:03.062 | 99.99th=[28443] 00:12:03.062 bw ( KiB/s): min=12856, max=14784, per=30.69%, avg=13820.00, stdev=1363.30, samples=2 00:12:03.062 iops : min= 3214, max= 3696, avg=3455.00, stdev=340.83, samples=2 00:12:03.062 lat (msec) : 2=0.02%, 10=0.80%, 20=54.74%, 50=44.45% 00:12:03.062 cpu : usr=3.90%, sys=10.09%, ctx=134, majf=0, minf=8 00:12:03.062 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:03.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:03.062 issued rwts: total=3072,3455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.062 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:03.062 job1: (groupid=0, jobs=1): err= 0: pid=67040: Tue Oct 1 13:46:12 2024 00:12:03.062 read: IOPS=1269, BW=5078KiB/s (5199kB/s)(5108KiB/1006msec) 00:12:03.062 slat (usec): min=8, max=41116, avg=365.06, stdev=2541.31 00:12:03.062 clat (usec): min=1888, max=115406, avg=45418.73, stdev=21011.86 00:12:03.062 lat (msec): min=18, max=115, avg=45.78, stdev=21.14 00:12:03.062 clat percentiles (msec): 00:12:03.062 | 1.00th=[ 20], 5.00th=[ 26], 10.00th=[ 31], 20.00th=[ 34], 00:12:03.062 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 37], 00:12:03.062 | 70.00th=[ 44], 80.00th=[ 61], 90.00th=[ 68], 95.00th=[ 96], 00:12:03.062 | 99.00th=[ 114], 99.50th=[ 114], 99.90th=[ 116], 99.95th=[ 116], 00:12:03.062 | 99.99th=[ 116] 00:12:03.062 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:12:03.062 slat (usec): min=6, max=27777, avg=339.14, stdev=1809.34 
00:12:03.062 clat (msec): min=10, max=122, avg=45.12, stdev=29.72 00:12:03.062 lat (msec): min=11, max=122, avg=45.46, stdev=29.87 00:12:03.062 clat percentiles (msec): 00:12:03.062 | 1.00th=[ 14], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 20], 00:12:03.062 | 30.00th=[ 27], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 34], 00:12:03.062 | 70.00th=[ 54], 80.00th=[ 72], 90.00th=[ 95], 95.00th=[ 109], 00:12:03.062 | 99.00th=[ 115], 99.50th=[ 120], 99.90th=[ 123], 99.95th=[ 123], 00:12:03.062 | 99.99th=[ 123] 00:12:03.062 bw ( KiB/s): min= 5840, max= 6460, per=13.66%, avg=6150.00, stdev=438.41, samples=2 00:12:03.062 iops : min= 1460, max= 1615, avg=1537.50, stdev=109.60, samples=2 00:12:03.062 lat (msec) : 2=0.04%, 20=12.37%, 50=57.20%, 100=23.89%, 250=6.51% 00:12:03.062 cpu : usr=2.29%, sys=4.58%, ctx=159, majf=0, minf=17 00:12:03.062 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:12:03.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:03.062 issued rwts: total=1277,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.062 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:03.062 job2: (groupid=0, jobs=1): err= 0: pid=67041: Tue Oct 1 13:46:12 2024 00:12:03.062 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:12:03.062 slat (usec): min=7, max=29906, avg=228.38, stdev=1654.74 00:12:03.062 clat (usec): min=15251, max=84342, avg=32091.84, stdev=11602.67 00:12:03.062 lat (usec): min=15279, max=84379, avg=32320.22, stdev=11699.63 00:12:03.062 clat percentiles (usec): 00:12:03.062 | 1.00th=[16188], 5.00th=[21627], 10.00th=[22152], 20.00th=[23725], 00:12:03.062 | 30.00th=[24773], 40.00th=[25297], 50.00th=[29230], 60.00th=[33424], 00:12:03.062 | 70.00th=[33817], 80.00th=[35914], 90.00th=[47449], 95.00th=[65799], 00:12:03.062 | 99.00th=[67634], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:12:03.062 | 99.99th=[84411] 00:12:03.062 write: IOPS=2548, BW=9.96MiB/s (10.4MB/s)(10.00MiB/1004msec); 0 zone resets 00:12:03.062 slat (usec): min=6, max=29081, avg=200.43, stdev=1450.96 00:12:03.062 clat (usec): min=809, max=49222, avg=24014.25, stdev=7084.26 00:12:03.062 lat (usec): min=8524, max=49249, avg=24214.68, stdev=7015.56 00:12:03.062 clat percentiles (usec): 00:12:03.062 | 1.00th=[ 9110], 5.00th=[15795], 10.00th=[17695], 20.00th=[19268], 00:12:03.062 | 30.00th=[20055], 40.00th=[21103], 50.00th=[21890], 60.00th=[22676], 00:12:03.062 | 70.00th=[26870], 80.00th=[31327], 90.00th=[31851], 95.00th=[34341], 00:12:03.062 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:12:03.062 | 99.99th=[49021] 00:12:03.062 bw ( KiB/s): min= 9216, max=10232, per=21.59%, avg=9724.00, stdev=718.42, samples=2 00:12:03.062 iops : min= 2304, max= 2558, avg=2431.00, stdev=179.61, samples=2 00:12:03.062 lat (usec) : 1000=0.02% 00:12:03.062 lat (msec) : 10=1.26%, 20=16.63%, 50=78.12%, 100=3.97% 00:12:03.062 cpu : usr=2.59%, sys=6.78%, ctx=98, majf=0, minf=5 00:12:03.062 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:12:03.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:03.062 issued rwts: total=2048,2559,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.062 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:03.062 job3: (groupid=0, jobs=1): err= 0: pid=67042: Tue Oct 1 13:46:12 2024 
00:12:03.062 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:12:03.062 slat (usec): min=7, max=8550, avg=130.54, stdev=845.11 00:12:03.062 clat (usec): min=8989, max=30298, avg=18122.61, stdev=2387.77 00:12:03.062 lat (usec): min=9007, max=36080, avg=18253.14, stdev=2426.84 00:12:03.062 clat percentiles (usec): 00:12:03.062 | 1.00th=[10814], 5.00th=[14877], 10.00th=[15664], 20.00th=[16909], 00:12:03.062 | 30.00th=[17433], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:12:03.063 | 70.00th=[18744], 80.00th=[19530], 90.00th=[20317], 95.00th=[21103], 00:12:03.063 | 99.00th=[27395], 99.50th=[27919], 99.90th=[30278], 99.95th=[30278], 00:12:03.063 | 99.99th=[30278] 00:12:03.063 write: IOPS=3759, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1004msec); 0 zone resets 00:12:03.063 slat (usec): min=10, max=14387, avg=132.70, stdev=843.12 00:12:03.063 clat (usec): min=571, max=26137, avg=16481.94, stdev=2494.24 00:12:03.063 lat (usec): min=5951, max=26177, avg=16614.65, stdev=2390.73 00:12:03.063 clat percentiles (usec): 00:12:03.063 | 1.00th=[ 6915], 5.00th=[13435], 10.00th=[14746], 20.00th=[15270], 00:12:03.063 | 30.00th=[15533], 40.00th=[16188], 50.00th=[16450], 60.00th=[16712], 00:12:03.063 | 70.00th=[17171], 80.00th=[18220], 90.00th=[18482], 95.00th=[19006], 00:12:03.063 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:12:03.063 | 99.99th=[26084] 00:12:03.063 bw ( KiB/s): min=13376, max=15895, per=32.50%, avg=14635.50, stdev=1781.20, samples=2 00:12:03.063 iops : min= 3344, max= 3973, avg=3658.50, stdev=444.77, samples=2 00:12:03.063 lat (usec) : 750=0.01% 00:12:03.063 lat (msec) : 10=1.33%, 20=90.43%, 50=8.22% 00:12:03.063 cpu : usr=3.39%, sys=11.76%, ctx=157, majf=0, minf=11 00:12:03.063 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:03.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:03.063 issued rwts: total=3584,3775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.063 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:03.063 00:12:03.063 Run status group 0 (all jobs): 00:12:03.063 READ: bw=38.8MiB/s (40.6MB/s), 5078KiB/s-13.9MiB/s (5199kB/s-14.6MB/s), io=39.0MiB (40.9MB), run=1002-1006msec 00:12:03.063 WRITE: bw=44.0MiB/s (46.1MB/s), 6107KiB/s-14.7MiB/s (6254kB/s-15.4MB/s), io=44.2MiB (46.4MB), run=1002-1006msec 00:12:03.063 00:12:03.063 Disk stats (read/write): 00:12:03.063 nvme0n1: ios=2610/3072, merge=0/0, ticks=49551/54988, in_queue=104539, util=89.78% 00:12:03.063 nvme0n2: ios=1073/1215, merge=0/0, ticks=44215/60829, in_queue=105044, util=89.51% 00:12:03.063 nvme0n3: ios=1957/2048, merge=0/0, ticks=55236/48319, in_queue=103555, util=90.79% 00:12:03.063 nvme0n4: ios=3078/3200, merge=0/0, ticks=53099/50007, in_queue=103106, util=89.92% 00:12:03.063 13:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:03.063 13:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=67061 00:12:03.063 13:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:03.063 13:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:03.063 [global] 00:12:03.063 thread=1 00:12:03.063 invalidate=1 00:12:03.063 rw=read 00:12:03.063 time_based=1 00:12:03.063 runtime=10 00:12:03.063 ioengine=libaio 00:12:03.063 direct=1 00:12:03.063 
bs=4096 00:12:03.063 iodepth=1 00:12:03.063 norandommap=1 00:12:03.063 numjobs=1 00:12:03.063 00:12:03.063 [job0] 00:12:03.063 filename=/dev/nvme0n1 00:12:03.063 [job1] 00:12:03.063 filename=/dev/nvme0n2 00:12:03.063 [job2] 00:12:03.063 filename=/dev/nvme0n3 00:12:03.063 [job3] 00:12:03.063 filename=/dev/nvme0n4 00:12:03.063 Could not set queue depth (nvme0n1) 00:12:03.063 Could not set queue depth (nvme0n2) 00:12:03.063 Could not set queue depth (nvme0n3) 00:12:03.063 Could not set queue depth (nvme0n4) 00:12:03.063 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:03.063 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:03.063 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:03.063 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:03.063 fio-3.35 00:12:03.063 Starting 4 threads 00:12:06.410 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:06.410 fio: pid=67104, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:06.410 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=38944768, buflen=4096 00:12:06.410 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:06.410 fio: pid=67103, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:06.410 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=63086592, buflen=4096 00:12:06.410 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:06.410 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:06.669 fio: pid=67101, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:06.669 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=7278592, buflen=4096 00:12:06.669 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:06.669 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:06.928 fio: pid=67102, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:06.928 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=55898112, buflen=4096 00:12:07.187 00:12:07.188 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67101: Tue Oct 1 13:46:17 2024 00:12:07.188 read: IOPS=5158, BW=20.1MiB/s (21.1MB/s)(70.9MiB/3521msec) 00:12:07.188 slat (usec): min=7, max=13260, avg=14.44, stdev=150.71 00:12:07.188 clat (usec): min=128, max=2102, avg=178.04, stdev=38.92 00:12:07.188 lat (usec): min=138, max=13434, avg=192.48, stdev=156.41 00:12:07.188 clat percentiles (usec): 00:12:07.188 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 153], 00:12:07.188 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 182], 00:12:07.188 | 70.00th=[ 192], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 225], 
00:12:07.188 | 99.00th=[ 249], 99.50th=[ 260], 99.90th=[ 363], 99.95th=[ 668], 00:12:07.188 | 99.99th=[ 1647] 00:12:07.188 bw ( KiB/s): min=19168, max=23656, per=35.24%, avg=20872.00, stdev=1906.28, samples=6 00:12:07.188 iops : min= 4792, max= 5914, avg=5218.00, stdev=476.57, samples=6 00:12:07.188 lat (usec) : 250=99.06%, 500=0.86%, 750=0.02%, 1000=0.02% 00:12:07.188 lat (msec) : 2=0.03%, 4=0.01% 00:12:07.188 cpu : usr=1.65%, sys=5.65%, ctx=18175, majf=0, minf=1 00:12:07.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.188 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.188 issued rwts: total=18162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.188 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67102: Tue Oct 1 13:46:17 2024 00:12:07.188 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(53.3MiB/3831msec) 00:12:07.188 slat (usec): min=7, max=11282, avg=18.57, stdev=187.75 00:12:07.188 clat (usec): min=123, max=3937, avg=260.49, stdev=82.51 00:12:07.188 lat (usec): min=134, max=11464, avg=279.05, stdev=206.11 00:12:07.188 clat percentiles (usec): 00:12:07.188 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 157], 20.00th=[ 178], 00:12:07.188 | 30.00th=[ 227], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 281], 00:12:07.188 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 338], 95.00th=[ 355], 00:12:07.188 | 99.00th=[ 453], 99.50th=[ 494], 99.90th=[ 635], 99.95th=[ 971], 00:12:07.188 | 99.99th=[ 2507] 00:12:07.188 bw ( KiB/s): min=11480, max=18836, per=22.97%, avg=13602.86, stdev=2599.13, samples=7 00:12:07.188 iops : min= 2870, max= 4709, avg=3400.71, stdev=649.78, samples=7 00:12:07.188 lat (usec) : 250=35.55%, 500=64.04%, 750=0.33%, 1000=0.03% 00:12:07.188 lat (msec) : 2=0.02%, 4=0.02% 00:12:07.188 cpu : usr=1.25%, sys=4.49%, ctx=13657, majf=0, minf=1 00:12:07.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.188 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.188 issued rwts: total=13648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.188 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67103: Tue Oct 1 13:46:17 2024 00:12:07.188 read: IOPS=4721, BW=18.4MiB/s (19.3MB/s)(60.2MiB/3262msec) 00:12:07.188 slat (usec): min=10, max=11209, avg=15.67, stdev=114.50 00:12:07.188 clat (usec): min=144, max=2407, avg=194.41, stdev=43.38 00:12:07.188 lat (usec): min=155, max=11452, avg=210.08, stdev=123.20 00:12:07.188 clat percentiles (usec): 00:12:07.188 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:12:07.188 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 196], 00:12:07.188 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 241], 00:12:07.188 | 99.00th=[ 273], 99.50th=[ 302], 99.90th=[ 490], 99.95th=[ 930], 00:12:07.188 | 99.99th=[ 1762] 00:12:07.188 bw ( KiB/s): min=17104, max=20976, per=31.95%, avg=18922.67, stdev=1480.89, samples=6 00:12:07.188 iops : min= 4276, max= 5244, avg=4730.67, stdev=370.22, samples=6 00:12:07.188 lat (usec) : 250=96.93%, 500=2.97%, 750=0.03%, 1000=0.03% 00:12:07.188 lat (msec) : 2=0.03%, 4=0.01% 
00:12:07.188 cpu : usr=1.56%, sys=6.16%, ctx=15406, majf=0, minf=2 00:12:07.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.188 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.188 issued rwts: total=15403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.188 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67104: Tue Oct 1 13:46:17 2024 00:12:07.188 read: IOPS=3206, BW=12.5MiB/s (13.1MB/s)(37.1MiB/2966msec) 00:12:07.188 slat (usec): min=11, max=189, avg=17.04, stdev= 5.70 00:12:07.188 clat (usec): min=141, max=2736, avg=292.69, stdev=55.08 00:12:07.188 lat (usec): min=155, max=2761, avg=309.73, stdev=55.21 00:12:07.188 clat percentiles (usec): 00:12:07.188 | 1.00th=[ 223], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 260], 00:12:07.188 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 302], 00:12:07.188 | 70.00th=[ 314], 80.00th=[ 326], 90.00th=[ 343], 95.00th=[ 355], 00:12:07.188 | 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 652], 99.95th=[ 1156], 00:12:07.188 | 99.99th=[ 2737] 00:12:07.188 bw ( KiB/s): min=11800, max=14256, per=21.32%, avg=12624.00, stdev=1113.83, samples=5 00:12:07.188 iops : min= 2950, max= 3564, avg=3156.00, stdev=278.46, samples=5 00:12:07.188 lat (usec) : 250=11.60%, 500=88.27%, 750=0.04%, 1000=0.02% 00:12:07.188 lat (msec) : 2=0.03%, 4=0.02% 00:12:07.188 cpu : usr=1.38%, sys=4.92%, ctx=9512, majf=0, minf=2 00:12:07.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.188 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.188 issued rwts: total=9509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.188 00:12:07.188 Run status group 0 (all jobs): 00:12:07.188 READ: bw=57.8MiB/s (60.6MB/s), 12.5MiB/s-20.1MiB/s (13.1MB/s-21.1MB/s), io=222MiB (232MB), run=2966-3831msec 00:12:07.188 00:12:07.188 Disk stats (read/write): 00:12:07.188 nvme0n1: ios=17468/0, merge=0/0, ticks=3142/0, in_queue=3142, util=95.25% 00:12:07.188 nvme0n2: ios=12430/0, merge=0/0, ticks=3409/0, in_queue=3409, util=95.53% 00:12:07.188 nvme0n3: ios=14640/0, merge=0/0, ticks=2900/0, in_queue=2900, util=96.27% 00:12:07.188 nvme0n4: ios=9169/0, merge=0/0, ticks=2754/0, in_queue=2754, util=96.76% 00:12:07.188 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:07.188 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:07.449 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:07.449 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:07.732 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:07.732 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:07.991 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:07.991 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:08.559 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:08.559 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:08.817 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:08.817 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 67061 00:12:08.817 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:08.817 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:08.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.817 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:08.817 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:08.817 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.817 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:08.817 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:08.817 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.817 nvmf hotplug test: fio failed as expected 00:12:08.817 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:08.817 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:08.817 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:08.817 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.074 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:09.074 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:09.074 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:09.074 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:09.074 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:09.074 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:09.074 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:09.074 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:09.074 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:12:09.074 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.074 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:09.074 rmmod nvme_tcp 00:12:09.332 rmmod nvme_fabrics 00:12:09.332 rmmod nvme_keyring 00:12:09.332 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:09.332 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:09.332 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:09.332 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 66666 ']' 00:12:09.332 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 66666 00:12:09.332 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 66666 ']' 00:12:09.333 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 66666 00:12:09.333 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:12:09.333 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.333 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66666 00:12:09.333 killing process with pid 66666 00:12:09.333 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:09.333 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:09.333 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66666' 00:12:09.333 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 66666 00:12:09.333 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 66666 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:09.592 13:46:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:09.592 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:12:09.850 00:12:09.850 real 0m21.466s 00:12:09.850 user 1m21.646s 00:12:09.850 sys 0m9.745s 00:12:09.850 ************************************ 00:12:09.850 END TEST nvmf_fio_target 00:12:09.850 ************************************ 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:09.850 ************************************ 00:12:09.850 START TEST nvmf_bdevio 00:12:09.850 ************************************ 00:12:09.850 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:09.850 * Looking for test storage... 
00:12:09.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:09.850 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:09.850 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:12:09.850 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:10.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.110 --rc genhtml_branch_coverage=1 00:12:10.110 --rc genhtml_function_coverage=1 00:12:10.110 --rc genhtml_legend=1 00:12:10.110 --rc geninfo_all_blocks=1 00:12:10.110 --rc geninfo_unexecuted_blocks=1 00:12:10.110 00:12:10.110 ' 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:10.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.110 --rc genhtml_branch_coverage=1 00:12:10.110 --rc genhtml_function_coverage=1 00:12:10.110 --rc genhtml_legend=1 00:12:10.110 --rc geninfo_all_blocks=1 00:12:10.110 --rc geninfo_unexecuted_blocks=1 00:12:10.110 00:12:10.110 ' 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:10.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.110 --rc genhtml_branch_coverage=1 00:12:10.110 --rc genhtml_function_coverage=1 00:12:10.110 --rc genhtml_legend=1 00:12:10.110 --rc geninfo_all_blocks=1 00:12:10.110 --rc geninfo_unexecuted_blocks=1 00:12:10.110 00:12:10.110 ' 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:10.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.110 --rc genhtml_branch_coverage=1 00:12:10.110 --rc genhtml_function_coverage=1 00:12:10.110 --rc genhtml_legend=1 00:12:10.110 --rc geninfo_all_blocks=1 00:12:10.110 --rc geninfo_unexecuted_blocks=1 00:12:10.110 00:12:10.110 ' 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.110 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:10.111 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:10.111 Cannot find device "nvmf_init_br" 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:10.111 Cannot find device "nvmf_init_br2" 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:10.111 Cannot find device "nvmf_tgt_br" 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:10.111 Cannot find device "nvmf_tgt_br2" 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:10.111 Cannot find device "nvmf_init_br" 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:10.111 Cannot find device "nvmf_init_br2" 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:10.111 Cannot find device "nvmf_tgt_br" 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:10.111 Cannot find device "nvmf_tgt_br2" 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:12:10.111 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:10.111 Cannot find device "nvmf_br" 00:12:10.369 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:12:10.369 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:10.369 Cannot find device "nvmf_init_if" 00:12:10.369 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:12:10.369 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:10.369 Cannot find device "nvmf_init_if2" 00:12:10.369 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:12:10.369 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:10.369 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.369 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:12:10.369 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:10.369 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.369 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:12:10.369 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:10.370 
13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:10.370 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:10.628 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:10.628 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:10.628 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:10.628 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:10.628 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:10.628 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:10.628 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:10.628 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:10.628 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:12:10.628 00:12:10.628 --- 10.0.0.3 ping statistics --- 00:12:10.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.628 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:12:10.628 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:10.628 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:10.629 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:12:10.629 00:12:10.629 --- 10.0.0.4 ping statistics --- 00:12:10.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.629 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:10.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:10.629 00:12:10.629 --- 10.0.0.1 ping statistics --- 00:12:10.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.629 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:10.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:10.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:12:10.629 00:12:10.629 --- 10.0.0.2 ping statistics --- 00:12:10.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.629 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:10.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=67438 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 67438 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 67438 ']' 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:10.629 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:10.629 [2024-10-01 13:46:20.685688] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:12:10.629 [2024-10-01 13:46:20.685800] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.888 [2024-10-01 13:46:20.828305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.888 [2024-10-01 13:46:21.011648] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.888 [2024-10-01 13:46:21.012160] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.888 [2024-10-01 13:46:21.012656] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.888 [2024-10-01 13:46:21.013101] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.888 [2024-10-01 13:46:21.013315] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.888 [2024-10-01 13:46:21.013678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:12:10.888 [2024-10-01 13:46:21.013777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:12:10.888 [2024-10-01 13:46:21.013955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:12:10.888 [2024-10-01 13:46:21.013958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.146 [2024-10-01 13:46:21.096154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.714 [2024-10-01 13:46:21.820382] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.714 Malloc0 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:11.714 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.715 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.715 [2024-10-01 13:46:21.892247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:11.973 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.973 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:11.973 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:11.973 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:12:11.973 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:12:11.973 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:12:11.973 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:12:11.973 { 00:12:11.973 "params": { 00:12:11.973 "name": "Nvme$subsystem", 00:12:11.973 "trtype": "$TEST_TRANSPORT", 00:12:11.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:11.973 "adrfam": "ipv4", 00:12:11.973 "trsvcid": "$NVMF_PORT", 00:12:11.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:11.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:11.973 "hdgst": ${hdgst:-false}, 00:12:11.973 "ddgst": ${ddgst:-false} 00:12:11.973 }, 00:12:11.973 "method": "bdev_nvme_attach_controller" 00:12:11.973 } 00:12:11.973 EOF 00:12:11.973 )") 00:12:11.973 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:12:11.973 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
00:12:11.973 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:12:11.973 13:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:12:11.973 "params": { 00:12:11.973 "name": "Nvme1", 00:12:11.973 "trtype": "tcp", 00:12:11.973 "traddr": "10.0.0.3", 00:12:11.973 "adrfam": "ipv4", 00:12:11.973 "trsvcid": "4420", 00:12:11.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:11.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:11.973 "hdgst": false, 00:12:11.973 "ddgst": false 00:12:11.973 }, 00:12:11.973 "method": "bdev_nvme_attach_controller" 00:12:11.973 }' 00:12:11.973 [2024-10-01 13:46:21.980655] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:12:11.973 [2024-10-01 13:46:21.980799] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67476 ] 00:12:11.973 [2024-10-01 13:46:22.135203] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:12.231 [2024-10-01 13:46:22.273239] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.232 [2024-10-01 13:46:22.273407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.232 [2024-10-01 13:46:22.273417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.232 [2024-10-01 13:46:22.343465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:12.490 I/O targets: 00:12:12.490 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:12.490 00:12:12.490 00:12:12.490 CUnit - A unit testing framework for C - Version 2.1-3 00:12:12.490 http://cunit.sourceforge.net/ 00:12:12.490 00:12:12.490 00:12:12.490 Suite: bdevio tests on: Nvme1n1 00:12:12.490 Test: blockdev write read block ...passed 00:12:12.490 Test: blockdev write zeroes read block ...passed 00:12:12.490 Test: blockdev write zeroes read no split ...passed 00:12:12.490 Test: blockdev write zeroes read split ...passed 00:12:12.490 Test: blockdev write zeroes read split partial ...passed 00:12:12.490 Test: blockdev reset ...[2024-10-01 13:46:22.503695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:12.490 [2024-10-01 13:46:22.503827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1522040 (9): Bad file descriptor 00:12:12.490 [2024-10-01 13:46:22.515059] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:12.490 passed 00:12:12.490 Test: blockdev write read 8 blocks ...passed 00:12:12.490 Test: blockdev write read size > 128k ...passed 00:12:12.490 Test: blockdev write read invalid size ...passed 00:12:12.490 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.490 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.490 Test: blockdev write read max offset ...passed 00:12:12.490 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.490 Test: blockdev writev readv 8 blocks ...passed 00:12:12.490 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.490 Test: blockdev writev readv block ...passed 00:12:12.490 Test: blockdev writev readv size > 128k ...passed 00:12:12.490 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.490 Test: blockdev comparev and writev ...[2024-10-01 13:46:22.526473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.491 [2024-10-01 13:46:22.526538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:12.491 [2024-10-01 13:46:22.526567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.491 [2024-10-01 13:46:22.526578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:12.491 [2024-10-01 13:46:22.527100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.491 [2024-10-01 13:46:22.527223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:12.491 [2024-10-01 13:46:22.527249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.491 [2024-10-01 13:46:22.527261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:12.491 [2024-10-01 13:46:22.527611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.491 [2024-10-01 13:46:22.527637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:12.491 [2024-10-01 13:46:22.527655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.491 [2024-10-01 13:46:22.527666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:12.491 [2024-10-01 13:46:22.528081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.491 [2024-10-01 13:46:22.528115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:12.491 [2024-10-01 13:46:22.528134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.491 [2024-10-01 13:46:22.528144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:12.491 passed 00:12:12.491 Test: blockdev nvme passthru rw ...passed 00:12:12.491 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.491 Test: blockdev nvme admin passthru ...[2024-10-01 13:46:22.529074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:12.491 [2024-10-01 13:46:22.529124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:12.491 [2024-10-01 13:46:22.529252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:12.491 [2024-10-01 13:46:22.529277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:12.491 [2024-10-01 13:46:22.529423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:12.491 [2024-10-01 13:46:22.529448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:12.491 [2024-10-01 13:46:22.529617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:12.491 [2024-10-01 13:46:22.529641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:12.491 passed 00:12:12.491 Test: blockdev copy ...passed 00:12:12.491 00:12:12.491 Run Summary: Type Total Ran Passed Failed Inactive 00:12:12.491 suites 1 1 n/a 0 0 00:12:12.491 tests 23 23 23 0 0 00:12:12.491 asserts 152 152 152 0 n/a 00:12:12.491 00:12:12.491 Elapsed time = 0.162 seconds 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:12.750 rmmod nvme_tcp 00:12:12.750 rmmod nvme_fabrics 00:12:12.750 rmmod nvme_keyring 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 67438 ']' 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 67438 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 67438 ']' 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 67438 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:12.750 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67438 00:12:13.009 killing process with pid 67438 00:12:13.009 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:13.009 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:13.009 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67438' 00:12:13.009 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 67438 00:12:13.009 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 67438 00:12:13.268 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:13.268 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:13.268 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:13.268 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:13.268 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:13.268 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:12:13.268 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:12:13.268 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.268 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:13.269 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:13.269 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:13.269 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:13.269 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:13.269 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:13.269 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:13.269 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:13.269 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:13.269 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:13.269 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:12:13.527 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:13.527 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:13.527 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:13.527 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:13.527 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.527 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.527 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.527 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:12:13.527 00:12:13.527 real 0m3.626s 00:12:13.527 user 0m10.657s 00:12:13.527 sys 0m1.076s 00:12:13.527 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.527 ************************************ 00:12:13.527 END TEST nvmf_bdevio 00:12:13.527 ************************************ 00:12:13.527 13:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:13.527 13:46:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:13.527 ************************************ 00:12:13.527 END TEST nvmf_target_core 00:12:13.527 ************************************ 00:12:13.527 00:12:13.527 real 2m45.333s 00:12:13.527 user 7m20.930s 00:12:13.527 sys 0m51.100s 00:12:13.527 13:46:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.527 13:46:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:13.527 13:46:23 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:13.527 13:46:23 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:13.527 13:46:23 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.527 13:46:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:13.527 ************************************ 00:12:13.527 START TEST nvmf_target_extra 00:12:13.527 ************************************ 00:12:13.527 13:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:13.786 * Looking for test storage... 
00:12:13.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.786 13:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:13.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.786 --rc genhtml_branch_coverage=1 00:12:13.786 --rc genhtml_function_coverage=1 00:12:13.786 --rc genhtml_legend=1 00:12:13.786 --rc geninfo_all_blocks=1 00:12:13.786 --rc geninfo_unexecuted_blocks=1 00:12:13.786 00:12:13.786 ' 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:13.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.787 --rc genhtml_branch_coverage=1 00:12:13.787 --rc genhtml_function_coverage=1 00:12:13.787 --rc genhtml_legend=1 00:12:13.787 --rc geninfo_all_blocks=1 00:12:13.787 --rc geninfo_unexecuted_blocks=1 00:12:13.787 00:12:13.787 ' 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:13.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.787 --rc genhtml_branch_coverage=1 00:12:13.787 --rc genhtml_function_coverage=1 00:12:13.787 --rc genhtml_legend=1 00:12:13.787 --rc geninfo_all_blocks=1 00:12:13.787 --rc geninfo_unexecuted_blocks=1 00:12:13.787 00:12:13.787 ' 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:13.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.787 --rc genhtml_branch_coverage=1 00:12:13.787 --rc genhtml_function_coverage=1 00:12:13.787 --rc genhtml_legend=1 00:12:13.787 --rc geninfo_all_blocks=1 00:12:13.787 --rc geninfo_unexecuted_blocks=1 00:12:13.787 00:12:13.787 ' 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.787 13:46:23 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:13.787 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:13.787 ************************************ 00:12:13.787 START TEST nvmf_auth_target 00:12:13.787 ************************************ 00:12:13.787 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:13.787 * Looking for test storage... 
00:12:14.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:14.047 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:14.047 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:12:14.047 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:14.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.047 --rc genhtml_branch_coverage=1 00:12:14.047 --rc genhtml_function_coverage=1 00:12:14.047 --rc genhtml_legend=1 00:12:14.047 --rc geninfo_all_blocks=1 00:12:14.047 --rc geninfo_unexecuted_blocks=1 00:12:14.047 00:12:14.047 ' 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:14.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.047 --rc genhtml_branch_coverage=1 00:12:14.047 --rc genhtml_function_coverage=1 00:12:14.047 --rc genhtml_legend=1 00:12:14.047 --rc geninfo_all_blocks=1 00:12:14.047 --rc geninfo_unexecuted_blocks=1 00:12:14.047 00:12:14.047 ' 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:14.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.047 --rc genhtml_branch_coverage=1 00:12:14.047 --rc genhtml_function_coverage=1 00:12:14.047 --rc genhtml_legend=1 00:12:14.047 --rc geninfo_all_blocks=1 00:12:14.047 --rc geninfo_unexecuted_blocks=1 00:12:14.047 00:12:14.047 ' 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:14.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.047 --rc genhtml_branch_coverage=1 00:12:14.047 --rc genhtml_function_coverage=1 00:12:14.047 --rc genhtml_legend=1 00:12:14.047 --rc geninfo_all_blocks=1 00:12:14.047 --rc geninfo_unexecuted_blocks=1 00:12:14.047 00:12:14.047 ' 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.047 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:14.048 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:14.048 
13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:14.048 Cannot find device "nvmf_init_br" 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:14.048 Cannot find device "nvmf_init_br2" 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:14.048 Cannot find device "nvmf_tgt_br" 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:14.048 Cannot find device "nvmf_tgt_br2" 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:14.048 Cannot find device "nvmf_init_br" 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:14.048 Cannot find device "nvmf_init_br2" 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:14.048 Cannot find device "nvmf_tgt_br" 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:14.048 Cannot find device "nvmf_tgt_br2" 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:14.048 Cannot find device "nvmf_br" 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:14.048 Cannot find device "nvmf_init_if" 00:12:14.048 13:46:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:14.048 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:14.308 Cannot find device "nvmf_init_if2" 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:14.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:14.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:14.308 13:46:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:14.308 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:14.567 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:14.567 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.149 ms 00:12:14.567 00:12:14.567 --- 10.0.0.3 ping statistics --- 00:12:14.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.567 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:14.567 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:14.567 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:12:14.567 00:12:14.567 --- 10.0.0.4 ping statistics --- 00:12:14.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.567 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:14.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:14.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:12:14.567 00:12:14.567 --- 10.0.0.1 ping statistics --- 00:12:14.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.567 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:14.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:12:14.567 00:12:14.567 --- 10.0.0.2 ping statistics --- 00:12:14.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.567 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=67768 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 67768 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67768 ']' 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:14.567 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:14.568 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.582 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.582 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:15.582 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:15.582 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:15.582 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67800 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=240775f30df119797471133c735a7613996cc11526f7c889 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.Yvz 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 240775f30df119797471133c735a7613996cc11526f7c889 0 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 240775f30df119797471133c735a7613996cc11526f7c889 0 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=240775f30df119797471133c735a7613996cc11526f7c889 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:15.841 13:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.Yvz 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.Yvz 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Yvz 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=240569c33c8b6aac1924e1434a75cd21c3cf6af063c37b1f80cda11ea23659f2 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.oxL 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 240569c33c8b6aac1924e1434a75cd21c3cf6af063c37b1f80cda11ea23659f2 3 00:12:15.841 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 240569c33c8b6aac1924e1434a75cd21c3cf6af063c37b1f80cda11ea23659f2 3 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=240569c33c8b6aac1924e1434a75cd21c3cf6af063c37b1f80cda11ea23659f2 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.oxL 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.oxL 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.oxL 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:12:15.842 13:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=e1eb26274822cdaa74099fdecdf65a43 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.oLv 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key e1eb26274822cdaa74099fdecdf65a43 1 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 e1eb26274822cdaa74099fdecdf65a43 1 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=e1eb26274822cdaa74099fdecdf65a43 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.oLv 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.oLv 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.oLv 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=130ccf7ffa8746a3ef4d6131f5e0bc953333816658b44196 00:12:15.842 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:12:15.842 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.KLy 00:12:15.842 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 130ccf7ffa8746a3ef4d6131f5e0bc953333816658b44196 2 00:12:15.842 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 130ccf7ffa8746a3ef4d6131f5e0bc953333816658b44196 2 00:12:15.842 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:15.842 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:15.842 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=130ccf7ffa8746a3ef4d6131f5e0bc953333816658b44196 00:12:15.842 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:12:15.842 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.KLy 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.KLy 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.KLy 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=274193b9a73d533a6a71e8a9a4b7beb0f4a70c3e5c1f5fc5 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.JGR 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 274193b9a73d533a6a71e8a9a4b7beb0f4a70c3e5c1f5fc5 2 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 274193b9a73d533a6a71e8a9a4b7beb0f4a70c3e5c1f5fc5 2 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=274193b9a73d533a6a71e8a9a4b7beb0f4a70c3e5c1f5fc5 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.JGR 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.JGR 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.JGR 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:16.101 13:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=fb4efe6f9e0fc89451b9410f8c93ec8d 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Esk 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key fb4efe6f9e0fc89451b9410f8c93ec8d 1 00:12:16.101 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 fb4efe6f9e0fc89451b9410f8c93ec8d 1 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=fb4efe6f9e0fc89451b9410f8c93ec8d 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Esk 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Esk 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Esk 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=31207bde11c9e95bf0aa650c69d77a6a312f39e3db571f9ac84536241d468cb8 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.ufx 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
31207bde11c9e95bf0aa650c69d77a6a312f39e3db571f9ac84536241d468cb8 3 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 31207bde11c9e95bf0aa650c69d77a6a312f39e3db571f9ac84536241d468cb8 3 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=31207bde11c9e95bf0aa650c69d77a6a312f39e3db571f9ac84536241d468cb8 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.ufx 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.ufx 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.ufx 00:12:16.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67768 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67768 ']' 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.102 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:16.714 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.714 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:16.714 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67800 /var/tmp/host.sock 00:12:16.714 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67800 ']' 00:12:16.714 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:12:16.714 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.714 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
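At this point the host-side app (spdk_tgt on /var/tmp/host.sock with the nvme_auth debug flag) is up, and gen_dhchap_key has produced the DH-HMAC-CHAP secrets for the run: keys[0..3] and ckeys[0..2], each wrapped as a DHHC-1 string and written with mode 0600 to a /tmp/spdk.key-* file (ckeys[3] is intentionally left empty). Everything that follows repeats one pattern per digest/dhgroup/key combination: load the key files into both apps with keyring_file_add_key, constrain the host with bdev_nvme_set_options, allow the host NQN on the subsystem with those key names, then authenticate once through the SPDK initiator and once through nvme-cli. A condensed sketch of a single iteration, using the address, NQNs and key files from this run (the helper variables and the use of cat to pass the literal DHHC-1 strings are assumptions for readability, not part of the test script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Register the secrets with the target (default RPC socket in this run) and the host app.
    $rpc keyring_file_add_key key0 /tmp/spdk.key-null.Yvz
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oxL
    $rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Yvz
    $rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oxL

    # Pin the host to one digest/dhgroup pair, then allow the host NQN on the subsystem
    # with bidirectional (controller) authentication.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Authenticate via the SPDK initiator ...
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # ... and via nvme-cli, passing the DHHC-1 secrets themselves.
    nvme connect -t tcp -a 10.0.0.3 -n $subnqn -i 1 -l 0 -q $hostnqn \
        --hostid 88f52f68-80e5-4327-8a21-70d63145da24 \
        --dhchap-secret "$(cat /tmp/spdk.key-null.Yvz)" \
        --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.oxL)"
    nvme disconnect -n $subnqn
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn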
00:12:16.714 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.714 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.059 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:17.059 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:17.059 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:17.059 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.059 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.059 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.059 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:17.059 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Yvz 00:12:17.059 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.059 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.059 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.059 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Yvz 00:12:17.059 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Yvz 00:12:17.332 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.oxL ]] 00:12:17.332 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oxL 00:12:17.332 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.332 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.332 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.332 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oxL 00:12:17.332 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oxL 00:12:17.591 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:17.591 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.oLv 00:12:17.591 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.591 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.591 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.591 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.oLv 00:12:17.591 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.oLv 00:12:17.850 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.KLy ]] 00:12:17.850 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KLy 00:12:17.850 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.850 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.850 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.850 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KLy 00:12:17.850 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KLy 00:12:18.109 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:18.109 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.JGR 00:12:18.109 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.109 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.109 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.109 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.JGR 00:12:18.109 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.JGR 00:12:18.676 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Esk ]] 00:12:18.676 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Esk 00:12:18.676 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.676 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.676 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.676 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Esk 00:12:18.676 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Esk 00:12:18.950 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:18.950 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ufx 00:12:18.950 13:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.950 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.950 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.950 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ufx 00:12:18.950 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ufx 00:12:19.226 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:19.226 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:19.226 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:19.226 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.226 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:19.226 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:19.493 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:19.493 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.493 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:19.493 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:19.493 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:19.493 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.493 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.493 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.493 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.493 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.493 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.493 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.493 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.795 00:12:19.795 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.795 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.795 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.054 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.054 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.054 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.054 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.313 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.313 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.313 { 00:12:20.313 "cntlid": 1, 00:12:20.313 "qid": 0, 00:12:20.313 "state": "enabled", 00:12:20.313 "thread": "nvmf_tgt_poll_group_000", 00:12:20.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:20.313 "listen_address": { 00:12:20.313 "trtype": "TCP", 00:12:20.313 "adrfam": "IPv4", 00:12:20.313 "traddr": "10.0.0.3", 00:12:20.313 "trsvcid": "4420" 00:12:20.313 }, 00:12:20.313 "peer_address": { 00:12:20.313 "trtype": "TCP", 00:12:20.313 "adrfam": "IPv4", 00:12:20.313 "traddr": "10.0.0.1", 00:12:20.313 "trsvcid": "41040" 00:12:20.313 }, 00:12:20.313 "auth": { 00:12:20.313 "state": "completed", 00:12:20.313 "digest": "sha256", 00:12:20.313 "dhgroup": "null" 00:12:20.313 } 00:12:20.313 } 00:12:20.313 ]' 00:12:20.313 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.313 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:20.313 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.313 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:20.313 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.313 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.313 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.313 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.572 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:12:20.572 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.842 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.842 13:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.100 00:12:26.100 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.100 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.100 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.358 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.358 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.358 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.359 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.359 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.359 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.359 { 00:12:26.359 "cntlid": 3, 00:12:26.359 "qid": 0, 00:12:26.359 "state": "enabled", 00:12:26.359 "thread": "nvmf_tgt_poll_group_000", 00:12:26.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:26.359 "listen_address": { 00:12:26.359 "trtype": "TCP", 00:12:26.359 "adrfam": "IPv4", 00:12:26.359 "traddr": "10.0.0.3", 00:12:26.359 "trsvcid": "4420" 00:12:26.359 }, 00:12:26.359 "peer_address": { 00:12:26.359 "trtype": "TCP", 00:12:26.359 "adrfam": "IPv4", 00:12:26.359 "traddr": "10.0.0.1", 00:12:26.359 "trsvcid": "41070" 00:12:26.359 }, 00:12:26.359 "auth": { 00:12:26.359 "state": "completed", 00:12:26.359 "digest": "sha256", 00:12:26.359 "dhgroup": "null" 00:12:26.359 } 00:12:26.359 } 00:12:26.359 ]' 00:12:26.359 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.359 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:26.359 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.618 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:26.618 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.618 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.618 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.618 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.880 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret 
DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:12:26.880 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:12:27.815 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.815 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:27.815 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.815 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.815 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.815 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.815 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:27.815 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:28.074 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:28.074 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.074 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:28.074 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:28.074 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:28.074 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.074 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.074 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.074 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.074 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.074 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.074 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.074 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.332 00:12:28.589 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.589 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.589 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.846 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.846 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.846 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.846 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.846 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.846 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.846 { 00:12:28.846 "cntlid": 5, 00:12:28.846 "qid": 0, 00:12:28.846 "state": "enabled", 00:12:28.846 "thread": "nvmf_tgt_poll_group_000", 00:12:28.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:28.847 "listen_address": { 00:12:28.847 "trtype": "TCP", 00:12:28.847 "adrfam": "IPv4", 00:12:28.847 "traddr": "10.0.0.3", 00:12:28.847 "trsvcid": "4420" 00:12:28.847 }, 00:12:28.847 "peer_address": { 00:12:28.847 "trtype": "TCP", 00:12:28.847 "adrfam": "IPv4", 00:12:28.847 "traddr": "10.0.0.1", 00:12:28.847 "trsvcid": "46936" 00:12:28.847 }, 00:12:28.847 "auth": { 00:12:28.847 "state": "completed", 00:12:28.847 "digest": "sha256", 00:12:28.847 "dhgroup": "null" 00:12:28.847 } 00:12:28.847 } 00:12:28.847 ]' 00:12:28.847 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.847 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:28.847 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.847 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:28.847 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.847 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.847 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.847 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.411 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:12:29.412 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:12:29.977 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.977 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:29.977 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.977 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.977 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.977 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.977 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:29.977 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:30.543 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:30.543 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.543 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:30.543 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:30.543 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:30.543 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.543 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:12:30.543 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.543 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.543 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.543 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:30.543 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.543 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.801 00:12:30.801 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.801 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.801 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.060 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.060 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.060 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.060 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.060 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.060 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.060 { 00:12:31.060 "cntlid": 7, 00:12:31.060 "qid": 0, 00:12:31.060 "state": "enabled", 00:12:31.060 "thread": "nvmf_tgt_poll_group_000", 00:12:31.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:31.060 "listen_address": { 00:12:31.060 "trtype": "TCP", 00:12:31.060 "adrfam": "IPv4", 00:12:31.060 "traddr": "10.0.0.3", 00:12:31.060 "trsvcid": "4420" 00:12:31.060 }, 00:12:31.060 "peer_address": { 00:12:31.060 "trtype": "TCP", 00:12:31.060 "adrfam": "IPv4", 00:12:31.060 "traddr": "10.0.0.1", 00:12:31.060 "trsvcid": "46960" 00:12:31.060 }, 00:12:31.060 "auth": { 00:12:31.060 "state": "completed", 00:12:31.060 "digest": "sha256", 00:12:31.060 "dhgroup": "null" 00:12:31.060 } 00:12:31.060 } 00:12:31.060 ]' 00:12:31.060 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.319 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:31.319 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.319 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:31.319 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.319 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.319 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.319 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.578 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:12:31.578 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:12:32.513 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.513 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:32.513 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.513 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.513 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.513 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:32.513 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.513 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:32.513 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:32.782 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:32.782 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.782 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:32.782 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:32.782 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:32.782 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.782 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.782 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.782 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.782 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.782 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.782 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.782 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.065 00:12:33.065 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.065 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.065 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.632 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.632 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.633 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.633 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.633 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.633 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.633 { 00:12:33.633 "cntlid": 9, 00:12:33.633 "qid": 0, 00:12:33.633 "state": "enabled", 00:12:33.633 "thread": "nvmf_tgt_poll_group_000", 00:12:33.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:33.633 "listen_address": { 00:12:33.633 "trtype": "TCP", 00:12:33.633 "adrfam": "IPv4", 00:12:33.633 "traddr": "10.0.0.3", 00:12:33.633 "trsvcid": "4420" 00:12:33.633 }, 00:12:33.633 "peer_address": { 00:12:33.633 "trtype": "TCP", 00:12:33.633 "adrfam": "IPv4", 00:12:33.633 "traddr": "10.0.0.1", 00:12:33.633 "trsvcid": "46980" 00:12:33.633 }, 00:12:33.633 "auth": { 00:12:33.633 "state": "completed", 00:12:33.633 "digest": "sha256", 00:12:33.633 "dhgroup": "ffdhe2048" 00:12:33.633 } 00:12:33.633 } 00:12:33.633 ]' 00:12:33.633 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.633 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:33.633 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.633 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:33.633 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.891 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.891 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.891 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.150 
13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:12:34.150 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:12:34.715 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.716 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:34.716 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.716 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.973 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.973 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.973 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:34.973 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:35.231 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:35.231 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.231 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:35.231 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:35.231 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:35.231 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.231 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.231 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.231 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.231 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.231 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.231 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.231 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.489 00:12:35.757 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.757 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.757 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.017 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.017 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.017 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.017 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.017 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.017 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.017 { 00:12:36.017 "cntlid": 11, 00:12:36.017 "qid": 0, 00:12:36.017 "state": "enabled", 00:12:36.017 "thread": "nvmf_tgt_poll_group_000", 00:12:36.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:36.017 "listen_address": { 00:12:36.017 "trtype": "TCP", 00:12:36.017 "adrfam": "IPv4", 00:12:36.017 "traddr": "10.0.0.3", 00:12:36.017 "trsvcid": "4420" 00:12:36.017 }, 00:12:36.017 "peer_address": { 00:12:36.017 "trtype": "TCP", 00:12:36.017 "adrfam": "IPv4", 00:12:36.017 "traddr": "10.0.0.1", 00:12:36.017 "trsvcid": "47004" 00:12:36.017 }, 00:12:36.017 "auth": { 00:12:36.017 "state": "completed", 00:12:36.017 "digest": "sha256", 00:12:36.017 "dhgroup": "ffdhe2048" 00:12:36.017 } 00:12:36.017 } 00:12:36.017 ]' 00:12:36.017 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.017 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:36.017 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.017 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:36.017 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.017 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.017 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.017 
13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.638 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:12:36.638 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:12:37.202 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.202 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:37.202 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.202 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.202 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.202 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.202 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:37.202 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:37.768 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:12:37.768 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.768 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:37.768 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:37.768 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:37.768 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.768 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.768 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.768 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.768 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:37.768 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.768 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.768 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.027 00:12:38.027 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.027 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.027 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.286 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.286 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.286 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.286 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.286 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.286 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.286 { 00:12:38.286 "cntlid": 13, 00:12:38.286 "qid": 0, 00:12:38.286 "state": "enabled", 00:12:38.286 "thread": "nvmf_tgt_poll_group_000", 00:12:38.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:38.286 "listen_address": { 00:12:38.286 "trtype": "TCP", 00:12:38.286 "adrfam": "IPv4", 00:12:38.286 "traddr": "10.0.0.3", 00:12:38.286 "trsvcid": "4420" 00:12:38.286 }, 00:12:38.286 "peer_address": { 00:12:38.286 "trtype": "TCP", 00:12:38.286 "adrfam": "IPv4", 00:12:38.286 "traddr": "10.0.0.1", 00:12:38.286 "trsvcid": "47030" 00:12:38.286 }, 00:12:38.286 "auth": { 00:12:38.286 "state": "completed", 00:12:38.286 "digest": "sha256", 00:12:38.286 "dhgroup": "ffdhe2048" 00:12:38.286 } 00:12:38.286 } 00:12:38.286 ]' 00:12:38.286 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.286 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:38.286 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.286 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:38.286 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.545 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.545 13:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.545 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.803 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:12:38.803 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
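The entries above and below repeat the same DH-CHAP round trip once per key and dhgroup. A condensed sketch of a single pass, using only the RPCs and nvme-cli flags that appear in this log; rpc_cmd is the autotest helper that (as used here) drives the target-side rpc.py, hostrpc expands to rpc.py -s /var/tmp/host.sock, and $hostnqn, $hostid and the DHHC-1 secrets below are placeholders rather than the values from this run:

    # host-side options: digests/dhgroups the initiator will negotiate (dhgroup and key index vary per iteration)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # target side: allow the host and bind its DH-CHAP key pair
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # attach over TCP with the same key pair, then verify the negotiated session on the target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # second half of the pass: the same handshake through the kernel initiator, then cleanup
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The per-iteration assertions visible in the log reduce to three jq checks on that qpairs JSON: .[0].auth.digest, .[0].auth.dhgroup, and .[0].auth.state must match the digest, dhgroup, and "completed" for the pass being exercised.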
00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.737 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:40.304 00:12:40.304 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.304 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.304 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.563 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.563 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.563 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.563 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.563 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.563 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.563 { 00:12:40.563 "cntlid": 15, 00:12:40.563 "qid": 0, 00:12:40.563 "state": "enabled", 00:12:40.563 "thread": "nvmf_tgt_poll_group_000", 00:12:40.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:40.563 "listen_address": { 00:12:40.563 "trtype": "TCP", 00:12:40.563 "adrfam": "IPv4", 00:12:40.563 "traddr": "10.0.0.3", 00:12:40.563 "trsvcid": "4420" 00:12:40.563 }, 00:12:40.563 "peer_address": { 00:12:40.563 "trtype": "TCP", 00:12:40.563 "adrfam": "IPv4", 00:12:40.563 "traddr": "10.0.0.1", 00:12:40.563 "trsvcid": "46070" 00:12:40.563 }, 00:12:40.563 "auth": { 00:12:40.563 "state": "completed", 00:12:40.563 "digest": "sha256", 00:12:40.563 "dhgroup": "ffdhe2048" 00:12:40.563 } 00:12:40.563 } 00:12:40.563 ]' 00:12:40.563 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.563 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:40.563 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.822 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:40.822 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.822 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.822 
13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.822 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.080 13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:12:41.080 13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:12:41.647 13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.647 13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:41.647 13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.647 13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.905 13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.905 13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:41.905 13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.905 13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:41.905 13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:42.163 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:42.163 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.163 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:42.163 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:42.163 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:42.163 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.163 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.163 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.163 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:42.163 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.163 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.163 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.163 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.421 00:12:42.421 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.421 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.421 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.988 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.988 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.988 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.988 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.988 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.988 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.988 { 00:12:42.988 "cntlid": 17, 00:12:42.988 "qid": 0, 00:12:42.988 "state": "enabled", 00:12:42.988 "thread": "nvmf_tgt_poll_group_000", 00:12:42.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:42.988 "listen_address": { 00:12:42.988 "trtype": "TCP", 00:12:42.988 "adrfam": "IPv4", 00:12:42.988 "traddr": "10.0.0.3", 00:12:42.988 "trsvcid": "4420" 00:12:42.988 }, 00:12:42.988 "peer_address": { 00:12:42.988 "trtype": "TCP", 00:12:42.988 "adrfam": "IPv4", 00:12:42.988 "traddr": "10.0.0.1", 00:12:42.988 "trsvcid": "46108" 00:12:42.988 }, 00:12:42.988 "auth": { 00:12:42.988 "state": "completed", 00:12:42.988 "digest": "sha256", 00:12:42.988 "dhgroup": "ffdhe3072" 00:12:42.988 } 00:12:42.988 } 00:12:42.988 ]' 00:12:42.988 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.988 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:42.988 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.988 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:42.988 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.988 13:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.988 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.988 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.247 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:12:43.247 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:12:44.183 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.183 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:44.183 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.183 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.183 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.183 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.183 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:44.183 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:44.441 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:44.442 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.442 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:44.442 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:44.442 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:44.442 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.442 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:12:44.442 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.442 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.442 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.442 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.442 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.442 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.700 00:12:44.700 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.700 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.700 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.959 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.959 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.959 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.959 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.959 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.959 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.959 { 00:12:44.959 "cntlid": 19, 00:12:44.959 "qid": 0, 00:12:44.959 "state": "enabled", 00:12:44.959 "thread": "nvmf_tgt_poll_group_000", 00:12:44.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:44.959 "listen_address": { 00:12:44.959 "trtype": "TCP", 00:12:44.959 "adrfam": "IPv4", 00:12:44.959 "traddr": "10.0.0.3", 00:12:44.959 "trsvcid": "4420" 00:12:44.959 }, 00:12:44.959 "peer_address": { 00:12:44.959 "trtype": "TCP", 00:12:44.959 "adrfam": "IPv4", 00:12:44.959 "traddr": "10.0.0.1", 00:12:44.959 "trsvcid": "46150" 00:12:44.959 }, 00:12:44.959 "auth": { 00:12:44.959 "state": "completed", 00:12:44.959 "digest": "sha256", 00:12:44.959 "dhgroup": "ffdhe3072" 00:12:44.959 } 00:12:44.959 } 00:12:44.959 ]' 00:12:44.959 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.217 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:45.217 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.217 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:45.217 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.217 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.217 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.217 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.489 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:12:45.489 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.574 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.142 00:12:47.142 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.142 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.142 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.400 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.400 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.400 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.400 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.400 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.400 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.400 { 00:12:47.400 "cntlid": 21, 00:12:47.400 "qid": 0, 00:12:47.400 "state": "enabled", 00:12:47.400 "thread": "nvmf_tgt_poll_group_000", 00:12:47.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:47.400 "listen_address": { 00:12:47.400 "trtype": "TCP", 00:12:47.400 "adrfam": "IPv4", 00:12:47.400 "traddr": "10.0.0.3", 00:12:47.400 "trsvcid": "4420" 00:12:47.400 }, 00:12:47.400 "peer_address": { 00:12:47.400 "trtype": "TCP", 00:12:47.400 "adrfam": "IPv4", 00:12:47.400 "traddr": "10.0.0.1", 00:12:47.400 "trsvcid": "46172" 00:12:47.400 }, 00:12:47.400 "auth": { 00:12:47.400 "state": "completed", 00:12:47.400 "digest": "sha256", 00:12:47.400 "dhgroup": "ffdhe3072" 00:12:47.400 } 00:12:47.400 } 00:12:47.400 ]' 00:12:47.400 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.400 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:47.400 13:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.400 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:47.400 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.400 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.400 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.400 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.659 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:12:47.659 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:12:48.595 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.596 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:48.596 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.596 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.596 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.596 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.596 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:48.596 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:48.854 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:48.854 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.854 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:48.854 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:48.854 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:48.854 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.854 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:12:48.854 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.854 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.854 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.854 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:48.854 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:48.854 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:49.112 00:12:49.112 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.112 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.112 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.680 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.680 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.680 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.680 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.680 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.680 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.680 { 00:12:49.680 "cntlid": 23, 00:12:49.680 "qid": 0, 00:12:49.680 "state": "enabled", 00:12:49.680 "thread": "nvmf_tgt_poll_group_000", 00:12:49.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:49.680 "listen_address": { 00:12:49.680 "trtype": "TCP", 00:12:49.680 "adrfam": "IPv4", 00:12:49.680 "traddr": "10.0.0.3", 00:12:49.680 "trsvcid": "4420" 00:12:49.680 }, 00:12:49.680 "peer_address": { 00:12:49.680 "trtype": "TCP", 00:12:49.680 "adrfam": "IPv4", 00:12:49.680 "traddr": "10.0.0.1", 00:12:49.680 "trsvcid": "58572" 00:12:49.681 }, 00:12:49.681 "auth": { 00:12:49.681 "state": "completed", 00:12:49.681 "digest": "sha256", 00:12:49.681 "dhgroup": "ffdhe3072" 00:12:49.681 } 00:12:49.681 } 00:12:49.681 ]' 00:12:49.681 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.681 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:12:49.681 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.681 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:49.681 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.681 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.681 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.681 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.939 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:12:49.939 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:12:50.507 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.507 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:50.507 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.507 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.507 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.507 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:50.507 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.507 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:50.507 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:50.767 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:50.767 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.767 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:50.767 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:50.767 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:50.767 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.767 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.767 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.767 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.767 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.767 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.767 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.767 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.334 00:12:51.334 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.334 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.334 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.592 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.592 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.592 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.592 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.592 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.592 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.592 { 00:12:51.592 "cntlid": 25, 00:12:51.592 "qid": 0, 00:12:51.592 "state": "enabled", 00:12:51.592 "thread": "nvmf_tgt_poll_group_000", 00:12:51.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:51.592 "listen_address": { 00:12:51.592 "trtype": "TCP", 00:12:51.592 "adrfam": "IPv4", 00:12:51.592 "traddr": "10.0.0.3", 00:12:51.592 "trsvcid": "4420" 00:12:51.592 }, 00:12:51.592 "peer_address": { 00:12:51.592 "trtype": "TCP", 00:12:51.592 "adrfam": "IPv4", 00:12:51.592 "traddr": "10.0.0.1", 00:12:51.592 "trsvcid": "58594" 00:12:51.592 }, 00:12:51.592 "auth": { 00:12:51.592 "state": "completed", 00:12:51.592 "digest": "sha256", 00:12:51.592 "dhgroup": "ffdhe4096" 00:12:51.592 } 00:12:51.592 } 00:12:51.592 ]' 00:12:51.592 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:12:51.592 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:51.592 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.851 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:51.851 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.851 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.851 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.851 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.109 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:12:52.109 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:12:52.676 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.676 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:52.676 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.676 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.676 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.676 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.676 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:52.676 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:53.242 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:53.242 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.242 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:53.242 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:53.242 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:53.242 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.242 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.242 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.242 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.242 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.242 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.242 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.242 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.500 00:12:53.500 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.500 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.500 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.759 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.759 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.759 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.759 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.759 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.759 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.759 { 00:12:53.759 "cntlid": 27, 00:12:53.759 "qid": 0, 00:12:53.759 "state": "enabled", 00:12:53.759 "thread": "nvmf_tgt_poll_group_000", 00:12:53.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:53.759 "listen_address": { 00:12:53.759 "trtype": "TCP", 00:12:53.759 "adrfam": "IPv4", 00:12:53.759 "traddr": "10.0.0.3", 00:12:53.759 "trsvcid": "4420" 00:12:53.759 }, 00:12:53.759 "peer_address": { 00:12:53.759 "trtype": "TCP", 00:12:53.759 "adrfam": "IPv4", 00:12:53.759 "traddr": "10.0.0.1", 00:12:53.759 "trsvcid": "58636" 00:12:53.759 }, 00:12:53.759 "auth": { 00:12:53.759 "state": "completed", 
00:12:53.759 "digest": "sha256", 00:12:53.759 "dhgroup": "ffdhe4096" 00:12:53.759 } 00:12:53.759 } 00:12:53.759 ]' 00:12:53.759 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.019 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:54.019 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.019 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:54.019 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.019 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.019 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.019 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.277 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:12:54.277 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:12:55.214 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.214 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:55.214 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.214 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.214 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.214 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.214 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:55.214 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:55.493 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:55.493 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.493 13:47:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:55.493 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:55.493 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:55.493 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.493 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.493 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.493 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.493 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.493 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.493 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.493 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.752 00:12:55.752 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.752 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.752 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.321 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.321 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.321 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.321 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.321 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.321 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.321 { 00:12:56.321 "cntlid": 29, 00:12:56.321 "qid": 0, 00:12:56.321 "state": "enabled", 00:12:56.321 "thread": "nvmf_tgt_poll_group_000", 00:12:56.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:56.321 "listen_address": { 00:12:56.321 "trtype": "TCP", 00:12:56.321 "adrfam": "IPv4", 00:12:56.321 "traddr": "10.0.0.3", 00:12:56.321 "trsvcid": "4420" 00:12:56.321 }, 00:12:56.321 "peer_address": { 00:12:56.321 "trtype": "TCP", 00:12:56.321 "adrfam": 
"IPv4", 00:12:56.321 "traddr": "10.0.0.1", 00:12:56.321 "trsvcid": "58658" 00:12:56.321 }, 00:12:56.321 "auth": { 00:12:56.321 "state": "completed", 00:12:56.321 "digest": "sha256", 00:12:56.321 "dhgroup": "ffdhe4096" 00:12:56.321 } 00:12:56.321 } 00:12:56.321 ]' 00:12:56.321 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.321 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:56.321 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.321 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:56.321 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.321 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.321 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.321 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.582 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:12:56.582 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:12:57.519 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.519 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:57.519 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.519 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.519 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.519 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.520 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:57.520 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:57.779 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:57.779 13:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.779 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:57.779 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:57.779 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:57.779 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.779 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:12:57.779 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.779 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.779 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.779 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:57.779 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:57.779 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:58.039 00:12:58.039 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.039 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.039 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.605 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.605 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.605 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.605 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.605 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.605 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.605 { 00:12:58.605 "cntlid": 31, 00:12:58.605 "qid": 0, 00:12:58.605 "state": "enabled", 00:12:58.605 "thread": "nvmf_tgt_poll_group_000", 00:12:58.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:12:58.605 "listen_address": { 00:12:58.605 "trtype": "TCP", 00:12:58.605 "adrfam": "IPv4", 00:12:58.605 "traddr": "10.0.0.3", 00:12:58.605 "trsvcid": "4420" 00:12:58.605 }, 00:12:58.605 "peer_address": { 00:12:58.605 "trtype": "TCP", 
00:12:58.605 "adrfam": "IPv4", 00:12:58.605 "traddr": "10.0.0.1", 00:12:58.605 "trsvcid": "58694" 00:12:58.605 }, 00:12:58.605 "auth": { 00:12:58.605 "state": "completed", 00:12:58.605 "digest": "sha256", 00:12:58.605 "dhgroup": "ffdhe4096" 00:12:58.605 } 00:12:58.605 } 00:12:58.605 ]' 00:12:58.605 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.605 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:58.605 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.605 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:58.605 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.605 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.605 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.605 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.865 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:12:58.865 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:12:59.428 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.428 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:12:59.428 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.428 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.428 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.428 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:59.428 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.428 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:59.428 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:59.994 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:12:59.994 
13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.994 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:59.994 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:59.994 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:59.994 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.994 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.994 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.994 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.994 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.994 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.994 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.994 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.252 00:13:00.510 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.510 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.510 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.768 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.768 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.768 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.768 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.768 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.768 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.768 { 00:13:00.768 "cntlid": 33, 00:13:00.768 "qid": 0, 00:13:00.768 "state": "enabled", 00:13:00.768 "thread": "nvmf_tgt_poll_group_000", 00:13:00.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:00.768 "listen_address": { 00:13:00.768 "trtype": "TCP", 00:13:00.768 "adrfam": "IPv4", 00:13:00.768 "traddr": 
"10.0.0.3", 00:13:00.768 "trsvcid": "4420" 00:13:00.768 }, 00:13:00.768 "peer_address": { 00:13:00.768 "trtype": "TCP", 00:13:00.768 "adrfam": "IPv4", 00:13:00.768 "traddr": "10.0.0.1", 00:13:00.768 "trsvcid": "47272" 00:13:00.768 }, 00:13:00.768 "auth": { 00:13:00.768 "state": "completed", 00:13:00.768 "digest": "sha256", 00:13:00.768 "dhgroup": "ffdhe6144" 00:13:00.768 } 00:13:00.768 } 00:13:00.768 ]' 00:13:00.768 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.768 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:00.768 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.768 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:00.768 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.027 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.027 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.027 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.284 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:13:01.284 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.272 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.855 00:13:02.855 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.855 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.855 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.113 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.113 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.113 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.113 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.113 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.113 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:03.113 { 00:13:03.113 "cntlid": 35, 00:13:03.113 "qid": 0, 00:13:03.113 "state": "enabled", 00:13:03.113 "thread": "nvmf_tgt_poll_group_000", 
00:13:03.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:03.113 "listen_address": { 00:13:03.113 "trtype": "TCP", 00:13:03.113 "adrfam": "IPv4", 00:13:03.113 "traddr": "10.0.0.3", 00:13:03.113 "trsvcid": "4420" 00:13:03.113 }, 00:13:03.113 "peer_address": { 00:13:03.113 "trtype": "TCP", 00:13:03.113 "adrfam": "IPv4", 00:13:03.113 "traddr": "10.0.0.1", 00:13:03.113 "trsvcid": "47304" 00:13:03.113 }, 00:13:03.113 "auth": { 00:13:03.113 "state": "completed", 00:13:03.113 "digest": "sha256", 00:13:03.113 "dhgroup": "ffdhe6144" 00:13:03.113 } 00:13:03.113 } 00:13:03.113 ]' 00:13:03.113 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.113 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:03.113 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.113 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:03.113 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.370 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.370 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.370 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.627 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:13:03.627 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:13:04.192 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.450 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:04.450 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.450 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.450 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.450 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:04.450 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:04.450 13:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:04.708 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:04.708 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.708 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:04.708 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:04.708 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:04.708 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.708 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.708 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.708 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.708 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.708 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.708 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.708 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.275 00:13:05.275 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:05.275 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.275 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.533 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.533 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.533 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.533 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.533 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.533 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:05.533 { 
00:13:05.533 "cntlid": 37, 00:13:05.533 "qid": 0, 00:13:05.533 "state": "enabled", 00:13:05.533 "thread": "nvmf_tgt_poll_group_000", 00:13:05.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:05.533 "listen_address": { 00:13:05.533 "trtype": "TCP", 00:13:05.533 "adrfam": "IPv4", 00:13:05.533 "traddr": "10.0.0.3", 00:13:05.533 "trsvcid": "4420" 00:13:05.533 }, 00:13:05.533 "peer_address": { 00:13:05.533 "trtype": "TCP", 00:13:05.533 "adrfam": "IPv4", 00:13:05.533 "traddr": "10.0.0.1", 00:13:05.533 "trsvcid": "47344" 00:13:05.533 }, 00:13:05.533 "auth": { 00:13:05.533 "state": "completed", 00:13:05.533 "digest": "sha256", 00:13:05.533 "dhgroup": "ffdhe6144" 00:13:05.533 } 00:13:05.533 } 00:13:05.533 ]' 00:13:05.533 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:05.533 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:05.533 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:05.533 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:05.533 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.790 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.790 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.790 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.049 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:13:06.049 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:13:06.615 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.615 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:06.616 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.616 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.874 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.874 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.874 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:06.874 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:07.132 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:07.132 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.132 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:07.132 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:07.132 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:07.132 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.132 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:13:07.132 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.132 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.132 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.132 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:07.132 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:07.132 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:07.698 00:13:07.698 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.698 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.698 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.956 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.956 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.956 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.956 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.956 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.956 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:13:07.956 { 00:13:07.956 "cntlid": 39, 00:13:07.956 "qid": 0, 00:13:07.956 "state": "enabled", 00:13:07.956 "thread": "nvmf_tgt_poll_group_000", 00:13:07.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:07.956 "listen_address": { 00:13:07.956 "trtype": "TCP", 00:13:07.956 "adrfam": "IPv4", 00:13:07.956 "traddr": "10.0.0.3", 00:13:07.956 "trsvcid": "4420" 00:13:07.956 }, 00:13:07.956 "peer_address": { 00:13:07.956 "trtype": "TCP", 00:13:07.956 "adrfam": "IPv4", 00:13:07.956 "traddr": "10.0.0.1", 00:13:07.956 "trsvcid": "47370" 00:13:07.956 }, 00:13:07.956 "auth": { 00:13:07.956 "state": "completed", 00:13:07.956 "digest": "sha256", 00:13:07.956 "dhgroup": "ffdhe6144" 00:13:07.956 } 00:13:07.956 } 00:13:07.956 ]' 00:13:07.956 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.956 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:07.956 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.956 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:07.956 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.214 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.214 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.214 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.486 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:13:08.486 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:13:09.050 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.050 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:09.050 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.050 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.050 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.051 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:09.051 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.051 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:09.051 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:09.308 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:09.308 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:09.308 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:09.308 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:09.308 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:09.308 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.308 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.308 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.308 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.308 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.308 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.308 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.308 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.240 00:13:10.240 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.240 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.240 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.240 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.240 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.240 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.240 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.240 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:10.240 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.240 { 00:13:10.240 "cntlid": 41, 00:13:10.240 "qid": 0, 00:13:10.240 "state": "enabled", 00:13:10.240 "thread": "nvmf_tgt_poll_group_000", 00:13:10.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:10.240 "listen_address": { 00:13:10.240 "trtype": "TCP", 00:13:10.240 "adrfam": "IPv4", 00:13:10.240 "traddr": "10.0.0.3", 00:13:10.240 "trsvcid": "4420" 00:13:10.240 }, 00:13:10.240 "peer_address": { 00:13:10.240 "trtype": "TCP", 00:13:10.240 "adrfam": "IPv4", 00:13:10.240 "traddr": "10.0.0.1", 00:13:10.240 "trsvcid": "36800" 00:13:10.240 }, 00:13:10.240 "auth": { 00:13:10.240 "state": "completed", 00:13:10.240 "digest": "sha256", 00:13:10.240 "dhgroup": "ffdhe8192" 00:13:10.240 } 00:13:10.240 } 00:13:10.240 ]' 00:13:10.240 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.499 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:10.499 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.499 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:10.499 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.499 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.499 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.499 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.757 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:13:10.757 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:13:11.722 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.723 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:11.723 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.723 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.723 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:11.723 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.723 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:11.723 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:11.980 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:11.980 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:11.980 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:11.980 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:11.981 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:11.981 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.981 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.981 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.981 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.981 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.981 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.981 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.981 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.545 00:13:12.545 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.546 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.546 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.803 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.803 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.803 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.803 13:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.803 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.803 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.803 { 00:13:12.803 "cntlid": 43, 00:13:12.803 "qid": 0, 00:13:12.803 "state": "enabled", 00:13:12.803 "thread": "nvmf_tgt_poll_group_000", 00:13:12.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:12.803 "listen_address": { 00:13:12.803 "trtype": "TCP", 00:13:12.803 "adrfam": "IPv4", 00:13:12.803 "traddr": "10.0.0.3", 00:13:12.803 "trsvcid": "4420" 00:13:12.803 }, 00:13:12.803 "peer_address": { 00:13:12.803 "trtype": "TCP", 00:13:12.803 "adrfam": "IPv4", 00:13:12.803 "traddr": "10.0.0.1", 00:13:12.803 "trsvcid": "36840" 00:13:12.803 }, 00:13:12.803 "auth": { 00:13:12.803 "state": "completed", 00:13:12.803 "digest": "sha256", 00:13:12.803 "dhgroup": "ffdhe8192" 00:13:12.803 } 00:13:12.803 } 00:13:12.803 ]' 00:13:12.803 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.803 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:12.803 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.803 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:12.803 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.062 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.062 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.062 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.319 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:13:13.319 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:13:13.887 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.887 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:13.887 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.887 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
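The stretch of trace above repeats one connect_authenticate() pass per key. Condensed into plain commands, a single pass looks roughly like the sketch below. The NQNs, addresses and the host-app RPC socket are copied from the log; key1/ckey1 are assumed to be DH-HMAC-CHAP key names loaded earlier in auth.sh (not shown in this excerpt), the target-side rpc_cmd calls are shown here against the default RPC socket since the trace does not echo their socket, and the long DHHC-1 secrets are elided.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24
  # host side: restrict negotiation to the digest/dhgroup under test
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # target side: allow this host NQN with the per-test key pair
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # attach a controller through the host app, then check what the qpair actually negotiated
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q $HOSTNQN -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state,.[0].auth.digest,.[0].auth.dhgroup'
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # same key pair exercised through nvme-cli, then cleaned up
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q $HOSTNQN \
      --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 \
      --dhchap-secret DHHC-1:00:... --dhchap-ctrl-secret DHHC-1:03:...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  $RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 $HOSTNQN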
00:13:13.887 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.887 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.887 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:13.887 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:14.483 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:14.483 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.483 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:14.483 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:14.483 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:14.483 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.483 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.483 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.483 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.483 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.483 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.483 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.483 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.101 00:13:15.101 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.101 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.101 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.359 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.359 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.359 13:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.359 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.359 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.359 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.359 { 00:13:15.359 "cntlid": 45, 00:13:15.359 "qid": 0, 00:13:15.359 "state": "enabled", 00:13:15.359 "thread": "nvmf_tgt_poll_group_000", 00:13:15.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:15.359 "listen_address": { 00:13:15.359 "trtype": "TCP", 00:13:15.359 "adrfam": "IPv4", 00:13:15.359 "traddr": "10.0.0.3", 00:13:15.359 "trsvcid": "4420" 00:13:15.359 }, 00:13:15.359 "peer_address": { 00:13:15.359 "trtype": "TCP", 00:13:15.359 "adrfam": "IPv4", 00:13:15.359 "traddr": "10.0.0.1", 00:13:15.359 "trsvcid": "36870" 00:13:15.359 }, 00:13:15.359 "auth": { 00:13:15.359 "state": "completed", 00:13:15.359 "digest": "sha256", 00:13:15.359 "dhgroup": "ffdhe8192" 00:13:15.359 } 00:13:15.359 } 00:13:15.359 ]' 00:13:15.359 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.359 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:15.359 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.359 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:15.359 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.359 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.359 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.359 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.931 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:13:15.931 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:13:16.496 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.496 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:16.496 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:16.496 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.496 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.496 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.497 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:16.497 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:16.754 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:16.754 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.754 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:16.754 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:16.754 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:16.754 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.754 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:13:16.754 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.754 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.754 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.754 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:16.755 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:16.755 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:17.688 00:13:17.688 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.688 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.688 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.947 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.947 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.947 
13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.947 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.947 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.947 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.947 { 00:13:17.947 "cntlid": 47, 00:13:17.947 "qid": 0, 00:13:17.947 "state": "enabled", 00:13:17.947 "thread": "nvmf_tgt_poll_group_000", 00:13:17.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:17.947 "listen_address": { 00:13:17.947 "trtype": "TCP", 00:13:17.947 "adrfam": "IPv4", 00:13:17.947 "traddr": "10.0.0.3", 00:13:17.947 "trsvcid": "4420" 00:13:17.947 }, 00:13:17.947 "peer_address": { 00:13:17.947 "trtype": "TCP", 00:13:17.947 "adrfam": "IPv4", 00:13:17.947 "traddr": "10.0.0.1", 00:13:17.947 "trsvcid": "36890" 00:13:17.947 }, 00:13:17.947 "auth": { 00:13:17.947 "state": "completed", 00:13:17.947 "digest": "sha256", 00:13:17.947 "dhgroup": "ffdhe8192" 00:13:17.947 } 00:13:17.947 } 00:13:17.947 ]' 00:13:17.947 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.947 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.947 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.947 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:17.947 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.947 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.947 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.947 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.514 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:13:18.514 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:13:19.079 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.079 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:19.079 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.079 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
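Just below this point the trace advances from sha256/ffdhe8192 to sha384 with the null dhgroup (target/auth.sh@118-@119). The auth.sh line references in the xtrace suggest the surrounding loop is nested roughly as sketched here; the exact contents of the digests, dhgroups and keys arrays are defined earlier in auth.sh and only partially visible in this excerpt.

  for digest in "${digests[@]}"; do            # sha256, sha384, ... (auth.sh@118)
    for dhgroup in "${dhgroups[@]}"; do        # null, ffdhe2048, ..., ffdhe8192 (auth.sh@119)
      for keyid in "${!keys[@]}"; do           # key indices 0-3 in this run (auth.sh@120)
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # auth.sh@121
        connect_authenticate "$digest" "$dhgroup" "$keyid"                                      # auth.sh@123
      done
    done
  done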
00:13:19.079 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.079 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:19.079 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:19.079 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.079 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:19.079 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:19.338 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:19.338 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.338 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:19.338 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:19.338 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:19.338 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.338 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.338 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.338 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.338 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.338 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.338 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.338 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.906 00:13:19.906 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.906 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.906 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.165 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.165 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.165 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.165 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.165 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.165 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.165 { 00:13:20.165 "cntlid": 49, 00:13:20.165 "qid": 0, 00:13:20.165 "state": "enabled", 00:13:20.165 "thread": "nvmf_tgt_poll_group_000", 00:13:20.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:20.165 "listen_address": { 00:13:20.165 "trtype": "TCP", 00:13:20.165 "adrfam": "IPv4", 00:13:20.165 "traddr": "10.0.0.3", 00:13:20.165 "trsvcid": "4420" 00:13:20.165 }, 00:13:20.165 "peer_address": { 00:13:20.165 "trtype": "TCP", 00:13:20.165 "adrfam": "IPv4", 00:13:20.165 "traddr": "10.0.0.1", 00:13:20.165 "trsvcid": "55216" 00:13:20.165 }, 00:13:20.165 "auth": { 00:13:20.165 "state": "completed", 00:13:20.165 "digest": "sha384", 00:13:20.165 "dhgroup": "null" 00:13:20.165 } 00:13:20.165 } 00:13:20.165 ]' 00:13:20.165 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.165 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:20.165 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.165 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:20.165 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.423 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.423 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.423 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.681 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:13:20.681 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:13:21.338 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.338 13:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:21.338 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.338 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.338 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.338 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.338 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:21.338 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:21.597 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:21.597 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:21.597 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:21.597 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:21.597 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:21.597 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.597 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.597 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.597 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.597 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.597 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.597 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.597 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.164 00:13:22.164 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.164 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.164 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.422 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.422 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.422 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.422 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.422 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.422 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.422 { 00:13:22.422 "cntlid": 51, 00:13:22.422 "qid": 0, 00:13:22.422 "state": "enabled", 00:13:22.422 "thread": "nvmf_tgt_poll_group_000", 00:13:22.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:22.422 "listen_address": { 00:13:22.422 "trtype": "TCP", 00:13:22.422 "adrfam": "IPv4", 00:13:22.422 "traddr": "10.0.0.3", 00:13:22.422 "trsvcid": "4420" 00:13:22.422 }, 00:13:22.422 "peer_address": { 00:13:22.422 "trtype": "TCP", 00:13:22.422 "adrfam": "IPv4", 00:13:22.422 "traddr": "10.0.0.1", 00:13:22.422 "trsvcid": "55254" 00:13:22.422 }, 00:13:22.422 "auth": { 00:13:22.422 "state": "completed", 00:13:22.423 "digest": "sha384", 00:13:22.423 "dhgroup": "null" 00:13:22.423 } 00:13:22.423 } 00:13:22.423 ]' 00:13:22.423 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.423 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:22.423 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.681 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:22.681 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.681 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.681 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.681 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.939 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:13:22.939 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:13:23.874 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.874 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.874 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:23.874 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.874 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.874 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.874 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.874 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:23.874 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:24.132 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:24.132 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.132 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:24.132 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:24.132 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:24.132 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.132 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.132 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.132 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.132 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.132 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.132 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.132 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.390 00:13:24.390 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.390 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:13:24.390 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.649 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.649 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.649 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.649 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.649 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.649 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.649 { 00:13:24.649 "cntlid": 53, 00:13:24.649 "qid": 0, 00:13:24.649 "state": "enabled", 00:13:24.649 "thread": "nvmf_tgt_poll_group_000", 00:13:24.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:24.649 "listen_address": { 00:13:24.649 "trtype": "TCP", 00:13:24.649 "adrfam": "IPv4", 00:13:24.649 "traddr": "10.0.0.3", 00:13:24.649 "trsvcid": "4420" 00:13:24.649 }, 00:13:24.649 "peer_address": { 00:13:24.649 "trtype": "TCP", 00:13:24.649 "adrfam": "IPv4", 00:13:24.649 "traddr": "10.0.0.1", 00:13:24.649 "trsvcid": "55288" 00:13:24.649 }, 00:13:24.649 "auth": { 00:13:24.649 "state": "completed", 00:13:24.649 "digest": "sha384", 00:13:24.649 "dhgroup": "null" 00:13:24.649 } 00:13:24.649 } 00:13:24.649 ]' 00:13:24.649 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.649 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:24.649 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.910 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:24.910 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.910 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.910 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.910 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.177 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:13:25.177 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:13:25.744 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.744 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:25.744 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.744 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.744 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.744 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.744 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:25.744 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:26.312 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:26.312 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.312 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:26.312 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:26.312 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:26.312 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.312 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:13:26.312 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.312 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.312 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.312 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:26.312 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:26.312 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:26.571 00:13:26.571 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.571 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.571 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.829 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.829 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.829 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.829 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.829 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.829 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.829 { 00:13:26.829 "cntlid": 55, 00:13:26.829 "qid": 0, 00:13:26.829 "state": "enabled", 00:13:26.829 "thread": "nvmf_tgt_poll_group_000", 00:13:26.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:26.829 "listen_address": { 00:13:26.829 "trtype": "TCP", 00:13:26.829 "adrfam": "IPv4", 00:13:26.829 "traddr": "10.0.0.3", 00:13:26.829 "trsvcid": "4420" 00:13:26.829 }, 00:13:26.829 "peer_address": { 00:13:26.829 "trtype": "TCP", 00:13:26.829 "adrfam": "IPv4", 00:13:26.829 "traddr": "10.0.0.1", 00:13:26.829 "trsvcid": "55308" 00:13:26.829 }, 00:13:26.829 "auth": { 00:13:26.829 "state": "completed", 00:13:26.829 "digest": "sha384", 00:13:26.829 "dhgroup": "null" 00:13:26.829 } 00:13:26.829 } 00:13:26.829 ]' 00:13:26.829 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.830 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:26.830 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.830 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:26.830 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.830 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.830 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.830 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.395 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:13:27.395 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:13:27.962 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:13:27.962 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:27.962 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.962 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.962 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.962 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:27.962 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.962 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:27.962 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:28.221 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:28.221 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.221 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:28.221 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:28.221 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:28.221 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.221 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.221 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.221 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.221 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.221 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.221 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.221 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.797 00:13:28.797 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:13:28.797 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.797 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.118 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.118 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.118 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.118 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.118 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.118 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.118 { 00:13:29.118 "cntlid": 57, 00:13:29.118 "qid": 0, 00:13:29.118 "state": "enabled", 00:13:29.118 "thread": "nvmf_tgt_poll_group_000", 00:13:29.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:29.118 "listen_address": { 00:13:29.118 "trtype": "TCP", 00:13:29.118 "adrfam": "IPv4", 00:13:29.118 "traddr": "10.0.0.3", 00:13:29.118 "trsvcid": "4420" 00:13:29.118 }, 00:13:29.118 "peer_address": { 00:13:29.118 "trtype": "TCP", 00:13:29.118 "adrfam": "IPv4", 00:13:29.118 "traddr": "10.0.0.1", 00:13:29.118 "trsvcid": "51848" 00:13:29.118 }, 00:13:29.118 "auth": { 00:13:29.118 "state": "completed", 00:13:29.118 "digest": "sha384", 00:13:29.118 "dhgroup": "ffdhe2048" 00:13:29.118 } 00:13:29.118 } 00:13:29.118 ]' 00:13:29.118 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.118 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:29.118 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.118 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:29.118 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.118 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.118 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.118 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.375 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:13:29.375 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: 
--dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:13:30.311 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.311 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:30.311 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.311 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.311 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.311 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.311 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:30.311 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:30.569 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:30.569 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.569 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:30.569 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:30.569 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:30.569 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.569 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.569 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.569 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.569 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.569 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.569 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.569 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.828 00:13:30.828 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.828 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.828 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.394 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.394 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.394 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.394 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.394 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.394 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.394 { 00:13:31.394 "cntlid": 59, 00:13:31.394 "qid": 0, 00:13:31.394 "state": "enabled", 00:13:31.394 "thread": "nvmf_tgt_poll_group_000", 00:13:31.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:31.394 "listen_address": { 00:13:31.394 "trtype": "TCP", 00:13:31.394 "adrfam": "IPv4", 00:13:31.394 "traddr": "10.0.0.3", 00:13:31.394 "trsvcid": "4420" 00:13:31.394 }, 00:13:31.394 "peer_address": { 00:13:31.394 "trtype": "TCP", 00:13:31.394 "adrfam": "IPv4", 00:13:31.394 "traddr": "10.0.0.1", 00:13:31.394 "trsvcid": "51880" 00:13:31.394 }, 00:13:31.394 "auth": { 00:13:31.394 "state": "completed", 00:13:31.394 "digest": "sha384", 00:13:31.394 "dhgroup": "ffdhe2048" 00:13:31.394 } 00:13:31.394 } 00:13:31.394 ]' 00:13:31.394 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.394 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:31.394 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.394 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:31.394 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.394 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.394 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.394 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.652 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:13:31.652 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.584 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.585 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.585 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.585 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.585 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.585 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.585 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.182 00:13:33.182 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.182 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.182 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.440 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.440 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.440 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.440 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.440 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.440 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.440 { 00:13:33.440 "cntlid": 61, 00:13:33.440 "qid": 0, 00:13:33.440 "state": "enabled", 00:13:33.440 "thread": "nvmf_tgt_poll_group_000", 00:13:33.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:33.440 "listen_address": { 00:13:33.440 "trtype": "TCP", 00:13:33.440 "adrfam": "IPv4", 00:13:33.440 "traddr": "10.0.0.3", 00:13:33.440 "trsvcid": "4420" 00:13:33.440 }, 00:13:33.440 "peer_address": { 00:13:33.440 "trtype": "TCP", 00:13:33.440 "adrfam": "IPv4", 00:13:33.440 "traddr": "10.0.0.1", 00:13:33.440 "trsvcid": "51908" 00:13:33.440 }, 00:13:33.440 "auth": { 00:13:33.440 "state": "completed", 00:13:33.440 "digest": "sha384", 00:13:33.440 "dhgroup": "ffdhe2048" 00:13:33.440 } 00:13:33.440 } 00:13:33.440 ]' 00:13:33.440 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.440 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:33.440 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.440 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:33.440 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.440 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.440 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.440 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.699 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:13:33.699 13:47:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:13:34.635 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.635 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:34.635 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.635 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.635 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.635 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.635 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:34.635 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:34.635 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:34.635 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.635 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:34.635 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:34.635 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:34.636 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.636 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:13:34.636 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.636 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.636 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.636 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:34.636 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:34.636 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.204 00:13:35.204 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.204 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.204 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.462 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.462 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.462 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.462 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.462 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.462 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.462 { 00:13:35.462 "cntlid": 63, 00:13:35.462 "qid": 0, 00:13:35.462 "state": "enabled", 00:13:35.462 "thread": "nvmf_tgt_poll_group_000", 00:13:35.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:35.462 "listen_address": { 00:13:35.462 "trtype": "TCP", 00:13:35.462 "adrfam": "IPv4", 00:13:35.462 "traddr": "10.0.0.3", 00:13:35.462 "trsvcid": "4420" 00:13:35.462 }, 00:13:35.462 "peer_address": { 00:13:35.462 "trtype": "TCP", 00:13:35.462 "adrfam": "IPv4", 00:13:35.462 "traddr": "10.0.0.1", 00:13:35.462 "trsvcid": "51946" 00:13:35.462 }, 00:13:35.462 "auth": { 00:13:35.462 "state": "completed", 00:13:35.462 "digest": "sha384", 00:13:35.462 "dhgroup": "ffdhe2048" 00:13:35.462 } 00:13:35.462 } 00:13:35.462 ]' 00:13:35.462 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.462 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:35.462 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.462 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:35.462 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.463 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.463 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.463 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.721 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:13:35.721 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:13:36.691 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.691 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:36.691 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.691 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.691 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.691 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:36.691 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.691 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:36.691 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:36.949 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:36.949 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.949 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:36.949 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:36.949 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:36.949 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.949 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.949 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.949 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.949 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.949 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.950 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:13:36.950 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.208 00:13:37.208 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.208 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.208 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.774 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.774 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.774 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.774 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.774 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.774 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.774 { 00:13:37.774 "cntlid": 65, 00:13:37.774 "qid": 0, 00:13:37.774 "state": "enabled", 00:13:37.774 "thread": "nvmf_tgt_poll_group_000", 00:13:37.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:37.774 "listen_address": { 00:13:37.774 "trtype": "TCP", 00:13:37.774 "adrfam": "IPv4", 00:13:37.774 "traddr": "10.0.0.3", 00:13:37.774 "trsvcid": "4420" 00:13:37.774 }, 00:13:37.774 "peer_address": { 00:13:37.774 "trtype": "TCP", 00:13:37.774 "adrfam": "IPv4", 00:13:37.774 "traddr": "10.0.0.1", 00:13:37.774 "trsvcid": "51994" 00:13:37.774 }, 00:13:37.774 "auth": { 00:13:37.774 "state": "completed", 00:13:37.774 "digest": "sha384", 00:13:37.774 "dhgroup": "ffdhe3072" 00:13:37.774 } 00:13:37.774 } 00:13:37.774 ]' 00:13:37.774 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.774 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:37.774 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.774 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:37.774 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.774 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.774 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.774 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.341 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:13:38.341 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:13:38.906 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.906 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:38.906 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.906 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.906 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.906 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.906 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:38.906 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:39.165 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:13:39.165 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.165 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:39.165 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:39.165 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:39.165 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.165 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.165 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.165 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.165 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.165 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.165 13:47:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.165 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.731 00:13:39.731 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.731 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.732 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.990 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.990 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.990 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.990 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.990 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.990 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.990 { 00:13:39.990 "cntlid": 67, 00:13:39.990 "qid": 0, 00:13:39.990 "state": "enabled", 00:13:39.990 "thread": "nvmf_tgt_poll_group_000", 00:13:39.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:39.990 "listen_address": { 00:13:39.990 "trtype": "TCP", 00:13:39.990 "adrfam": "IPv4", 00:13:39.990 "traddr": "10.0.0.3", 00:13:39.990 "trsvcid": "4420" 00:13:39.990 }, 00:13:39.990 "peer_address": { 00:13:39.990 "trtype": "TCP", 00:13:39.990 "adrfam": "IPv4", 00:13:39.990 "traddr": "10.0.0.1", 00:13:39.990 "trsvcid": "49526" 00:13:39.990 }, 00:13:39.990 "auth": { 00:13:39.990 "state": "completed", 00:13:39.990 "digest": "sha384", 00:13:39.990 "dhgroup": "ffdhe3072" 00:13:39.990 } 00:13:39.990 } 00:13:39.990 ]' 00:13:39.990 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.990 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:39.990 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.990 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:39.990 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.990 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.990 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.990 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.248 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:13:40.248 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:13:41.181 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.181 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:41.181 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.181 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.181 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.181 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.181 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:41.181 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:41.439 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:41.439 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.439 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:41.439 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:41.439 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:41.439 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.439 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.439 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.439 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.439 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.439 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.440 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.440 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.704 00:13:41.704 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.704 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.704 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.976 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.976 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.976 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.976 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.976 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.976 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.976 { 00:13:41.976 "cntlid": 69, 00:13:41.976 "qid": 0, 00:13:41.976 "state": "enabled", 00:13:41.976 "thread": "nvmf_tgt_poll_group_000", 00:13:41.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:41.976 "listen_address": { 00:13:41.976 "trtype": "TCP", 00:13:41.976 "adrfam": "IPv4", 00:13:41.976 "traddr": "10.0.0.3", 00:13:41.976 "trsvcid": "4420" 00:13:41.976 }, 00:13:41.976 "peer_address": { 00:13:41.976 "trtype": "TCP", 00:13:41.976 "adrfam": "IPv4", 00:13:41.976 "traddr": "10.0.0.1", 00:13:41.976 "trsvcid": "49560" 00:13:41.976 }, 00:13:41.976 "auth": { 00:13:41.976 "state": "completed", 00:13:41.976 "digest": "sha384", 00:13:41.976 "dhgroup": "ffdhe3072" 00:13:41.976 } 00:13:41.976 } 00:13:41.976 ]' 00:13:41.976 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.976 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:41.976 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.234 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:42.234 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.234 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.234 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:42.234 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.493 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:13:42.493 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.427 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.994 00:13:43.994 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.994 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.994 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.252 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.252 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.252 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.252 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.252 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.252 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.252 { 00:13:44.252 "cntlid": 71, 00:13:44.252 "qid": 0, 00:13:44.252 "state": "enabled", 00:13:44.252 "thread": "nvmf_tgt_poll_group_000", 00:13:44.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:44.252 "listen_address": { 00:13:44.252 "trtype": "TCP", 00:13:44.253 "adrfam": "IPv4", 00:13:44.253 "traddr": "10.0.0.3", 00:13:44.253 "trsvcid": "4420" 00:13:44.253 }, 00:13:44.253 "peer_address": { 00:13:44.253 "trtype": "TCP", 00:13:44.253 "adrfam": "IPv4", 00:13:44.253 "traddr": "10.0.0.1", 00:13:44.253 "trsvcid": "49588" 00:13:44.253 }, 00:13:44.253 "auth": { 00:13:44.253 "state": "completed", 00:13:44.253 "digest": "sha384", 00:13:44.253 "dhgroup": "ffdhe3072" 00:13:44.253 } 00:13:44.253 } 00:13:44.253 ]' 00:13:44.253 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.253 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:44.253 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.253 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:44.253 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.253 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.253 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.253 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.515 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:13:44.515 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.463 13:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.463 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.030 00:13:46.030 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.030 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.030 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.289 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.289 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.289 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.289 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.289 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.289 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.289 { 00:13:46.289 "cntlid": 73, 00:13:46.289 "qid": 0, 00:13:46.289 "state": "enabled", 00:13:46.289 "thread": "nvmf_tgt_poll_group_000", 00:13:46.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:46.289 "listen_address": { 00:13:46.289 "trtype": "TCP", 00:13:46.289 "adrfam": "IPv4", 00:13:46.289 "traddr": "10.0.0.3", 00:13:46.289 "trsvcid": "4420" 00:13:46.289 }, 00:13:46.289 "peer_address": { 00:13:46.289 "trtype": "TCP", 00:13:46.289 "adrfam": "IPv4", 00:13:46.289 "traddr": "10.0.0.1", 00:13:46.289 "trsvcid": "49614" 00:13:46.289 }, 00:13:46.289 "auth": { 00:13:46.289 "state": "completed", 00:13:46.289 "digest": "sha384", 00:13:46.289 "dhgroup": "ffdhe4096" 00:13:46.289 } 00:13:46.289 } 00:13:46.289 ]' 00:13:46.289 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.289 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:46.289 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.289 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:46.289 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.547 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.547 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.547 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.804 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:13:46.804 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.777 13:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.777 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.046 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.046 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.046 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.046 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.304 00:13:48.304 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.304 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.304 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.562 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.562 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.562 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.562 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.562 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.562 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.562 { 00:13:48.562 "cntlid": 75, 00:13:48.562 "qid": 0, 00:13:48.562 "state": "enabled", 00:13:48.562 "thread": "nvmf_tgt_poll_group_000", 00:13:48.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:48.562 "listen_address": { 00:13:48.562 "trtype": "TCP", 00:13:48.562 "adrfam": "IPv4", 00:13:48.562 "traddr": "10.0.0.3", 00:13:48.562 "trsvcid": "4420" 00:13:48.562 }, 00:13:48.562 "peer_address": { 00:13:48.562 "trtype": "TCP", 00:13:48.562 "adrfam": "IPv4", 00:13:48.562 "traddr": "10.0.0.1", 00:13:48.562 "trsvcid": "49642" 00:13:48.562 }, 00:13:48.562 "auth": { 00:13:48.562 "state": "completed", 00:13:48.562 "digest": "sha384", 00:13:48.562 "dhgroup": "ffdhe4096" 00:13:48.562 } 00:13:48.562 } 00:13:48.562 ]' 00:13:48.563 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:48.820 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:48.820 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.820 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:13:48.820 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.820 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.820 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.820 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.077 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:13:49.077 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:13:50.011 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.011 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:50.011 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.011 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.011 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.011 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.011 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:50.011 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:50.270 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:50.270 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.270 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:50.270 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:50.270 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:50.270 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.270 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.270 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.270 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.270 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.270 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.270 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.270 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.528 00:13:50.528 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.528 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.528 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.121 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.121 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.121 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.121 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.121 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.121 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.121 { 00:13:51.121 "cntlid": 77, 00:13:51.121 "qid": 0, 00:13:51.121 "state": "enabled", 00:13:51.121 "thread": "nvmf_tgt_poll_group_000", 00:13:51.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:51.121 "listen_address": { 00:13:51.121 "trtype": "TCP", 00:13:51.121 "adrfam": "IPv4", 00:13:51.121 "traddr": "10.0.0.3", 00:13:51.121 "trsvcid": "4420" 00:13:51.121 }, 00:13:51.121 "peer_address": { 00:13:51.121 "trtype": "TCP", 00:13:51.121 "adrfam": "IPv4", 00:13:51.121 "traddr": "10.0.0.1", 00:13:51.121 "trsvcid": "35326" 00:13:51.121 }, 00:13:51.121 "auth": { 00:13:51.121 "state": "completed", 00:13:51.121 "digest": "sha384", 00:13:51.121 "dhgroup": "ffdhe4096" 00:13:51.121 } 00:13:51.121 } 00:13:51.121 ]' 00:13:51.121 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.121 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:51.121 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:13:51.121 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:51.121 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.121 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.121 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.121 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.379 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:13:51.379 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:13:52.313 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.313 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:52.313 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.313 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.313 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.313 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.313 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:52.313 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:52.571 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:13:52.571 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.571 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:52.571 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:52.571 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:52.571 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.571 13:48:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:13:52.571 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.571 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.571 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.571 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:52.571 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.571 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.137 00:13:53.137 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.137 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.137 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.394 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.394 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.394 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.394 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.394 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.394 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.394 { 00:13:53.394 "cntlid": 79, 00:13:53.394 "qid": 0, 00:13:53.394 "state": "enabled", 00:13:53.394 "thread": "nvmf_tgt_poll_group_000", 00:13:53.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:53.394 "listen_address": { 00:13:53.394 "trtype": "TCP", 00:13:53.394 "adrfam": "IPv4", 00:13:53.394 "traddr": "10.0.0.3", 00:13:53.394 "trsvcid": "4420" 00:13:53.394 }, 00:13:53.394 "peer_address": { 00:13:53.394 "trtype": "TCP", 00:13:53.394 "adrfam": "IPv4", 00:13:53.394 "traddr": "10.0.0.1", 00:13:53.394 "trsvcid": "35364" 00:13:53.394 }, 00:13:53.394 "auth": { 00:13:53.394 "state": "completed", 00:13:53.394 "digest": "sha384", 00:13:53.394 "dhgroup": "ffdhe4096" 00:13:53.394 } 00:13:53.394 } 00:13:53.395 ]' 00:13:53.395 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.395 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:53.395 13:48:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.395 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:53.395 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.395 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.395 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.395 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.709 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:13:53.709 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:13:54.643 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.643 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:54.643 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.643 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.643 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.643 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.643 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.643 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:54.643 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:54.902 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:54.902 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.902 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:54.902 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:54.902 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:54.902 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.902 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.902 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.902 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.902 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.902 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.902 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.902 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.469 00:13:55.469 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.469 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.469 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.729 { 00:13:55.729 "cntlid": 81, 00:13:55.729 "qid": 0, 00:13:55.729 "state": "enabled", 00:13:55.729 "thread": "nvmf_tgt_poll_group_000", 00:13:55.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:55.729 "listen_address": { 00:13:55.729 "trtype": "TCP", 00:13:55.729 "adrfam": "IPv4", 00:13:55.729 "traddr": "10.0.0.3", 00:13:55.729 "trsvcid": "4420" 00:13:55.729 }, 00:13:55.729 "peer_address": { 00:13:55.729 "trtype": "TCP", 00:13:55.729 "adrfam": "IPv4", 00:13:55.729 "traddr": "10.0.0.1", 00:13:55.729 "trsvcid": "35396" 00:13:55.729 }, 00:13:55.729 "auth": { 00:13:55.729 "state": "completed", 00:13:55.729 "digest": "sha384", 00:13:55.729 "dhgroup": "ffdhe6144" 00:13:55.729 } 00:13:55.729 } 00:13:55.729 ]' 00:13:55.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:13:55.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:55.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.988 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:55.988 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.988 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.988 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.988 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.247 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:13:56.247 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.217 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.785 00:13:57.785 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.785 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.785 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.352 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.352 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.352 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.352 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.352 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.352 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:58.352 { 00:13:58.352 "cntlid": 83, 00:13:58.352 "qid": 0, 00:13:58.352 "state": "enabled", 00:13:58.352 "thread": "nvmf_tgt_poll_group_000", 00:13:58.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:13:58.352 "listen_address": { 00:13:58.352 "trtype": "TCP", 00:13:58.352 "adrfam": "IPv4", 00:13:58.352 "traddr": "10.0.0.3", 00:13:58.352 "trsvcid": "4420" 00:13:58.352 }, 00:13:58.352 "peer_address": { 00:13:58.352 "trtype": "TCP", 00:13:58.352 "adrfam": "IPv4", 00:13:58.352 "traddr": "10.0.0.1", 00:13:58.352 "trsvcid": "35426" 00:13:58.352 }, 00:13:58.352 "auth": { 00:13:58.352 "state": "completed", 00:13:58.352 "digest": "sha384", 
00:13:58.352 "dhgroup": "ffdhe6144" 00:13:58.352 } 00:13:58.352 } 00:13:58.352 ]' 00:13:58.352 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:58.352 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:58.352 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:58.352 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:58.352 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:58.352 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.352 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.352 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.611 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:13:58.611 13:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.546 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.115 00:14:00.115 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:00.115 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.115 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:00.372 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.372 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.372 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.372 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.372 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.372 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.372 { 00:14:00.372 "cntlid": 85, 00:14:00.372 "qid": 0, 00:14:00.372 "state": "enabled", 00:14:00.372 "thread": "nvmf_tgt_poll_group_000", 00:14:00.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:00.372 "listen_address": { 00:14:00.372 "trtype": "TCP", 00:14:00.372 "adrfam": "IPv4", 00:14:00.372 "traddr": "10.0.0.3", 00:14:00.372 "trsvcid": "4420" 00:14:00.372 }, 00:14:00.372 "peer_address": { 00:14:00.372 "trtype": "TCP", 00:14:00.372 "adrfam": "IPv4", 00:14:00.372 "traddr": "10.0.0.1", 00:14:00.372 "trsvcid": "32880" 
00:14:00.372 }, 00:14:00.372 "auth": { 00:14:00.372 "state": "completed", 00:14:00.372 "digest": "sha384", 00:14:00.372 "dhgroup": "ffdhe6144" 00:14:00.372 } 00:14:00.372 } 00:14:00.372 ]' 00:14:00.372 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.372 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:00.372 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.630 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:00.630 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.630 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.630 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.630 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.887 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:14:00.887 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:14:01.450 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.450 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:01.450 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.450 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.450 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.450 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.450 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:01.450 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:02.016 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:02.016 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:14:02.016 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:02.016 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:02.016 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:02.016 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.016 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:14:02.016 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.016 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.016 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.016 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:02.016 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:02.016 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:02.274 00:14:02.274 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:02.274 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.274 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.839 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.839 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.839 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.839 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.839 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.839 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:02.839 { 00:14:02.839 "cntlid": 87, 00:14:02.839 "qid": 0, 00:14:02.839 "state": "enabled", 00:14:02.839 "thread": "nvmf_tgt_poll_group_000", 00:14:02.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:02.840 "listen_address": { 00:14:02.840 "trtype": "TCP", 00:14:02.840 "adrfam": "IPv4", 00:14:02.840 "traddr": "10.0.0.3", 00:14:02.840 "trsvcid": "4420" 00:14:02.840 }, 00:14:02.840 "peer_address": { 00:14:02.840 "trtype": "TCP", 00:14:02.840 "adrfam": "IPv4", 00:14:02.840 "traddr": "10.0.0.1", 00:14:02.840 "trsvcid": 
"32902" 00:14:02.840 }, 00:14:02.840 "auth": { 00:14:02.840 "state": "completed", 00:14:02.840 "digest": "sha384", 00:14:02.840 "dhgroup": "ffdhe6144" 00:14:02.840 } 00:14:02.840 } 00:14:02.840 ]' 00:14:02.840 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:02.840 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:02.840 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.840 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:02.840 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.840 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.840 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.840 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.097 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:14:03.097 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:14:04.031 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.031 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:04.031 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.031 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.031 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.031 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:04.031 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.031 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:04.031 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:04.031 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:04.031 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:14:04.031 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:04.031 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:04.031 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:04.031 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.031 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.031 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.031 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.289 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.289 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.289 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.289 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.221 00:14:05.221 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.221 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.221 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.221 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.221 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.221 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.221 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.221 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.221 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.221 { 00:14:05.221 "cntlid": 89, 00:14:05.221 "qid": 0, 00:14:05.221 "state": "enabled", 00:14:05.221 "thread": "nvmf_tgt_poll_group_000", 00:14:05.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:05.221 "listen_address": { 00:14:05.221 "trtype": "TCP", 00:14:05.221 "adrfam": "IPv4", 00:14:05.221 "traddr": "10.0.0.3", 00:14:05.221 "trsvcid": "4420" 00:14:05.221 }, 00:14:05.221 "peer_address": { 00:14:05.221 
"trtype": "TCP", 00:14:05.221 "adrfam": "IPv4", 00:14:05.221 "traddr": "10.0.0.1", 00:14:05.221 "trsvcid": "32926" 00:14:05.221 }, 00:14:05.221 "auth": { 00:14:05.221 "state": "completed", 00:14:05.221 "digest": "sha384", 00:14:05.221 "dhgroup": "ffdhe8192" 00:14:05.221 } 00:14:05.221 } 00:14:05.221 ]' 00:14:05.221 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.480 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:05.480 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.480 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:05.480 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.480 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.480 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.480 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.738 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:14:05.739 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:06.687 13:48:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.687 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.620 00:14:07.620 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.620 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.620 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.877 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.877 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.877 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.877 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.877 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.877 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.877 { 00:14:07.877 "cntlid": 91, 00:14:07.877 "qid": 0, 00:14:07.877 "state": "enabled", 00:14:07.877 "thread": "nvmf_tgt_poll_group_000", 00:14:07.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 
00:14:07.877 "listen_address": { 00:14:07.877 "trtype": "TCP", 00:14:07.878 "adrfam": "IPv4", 00:14:07.878 "traddr": "10.0.0.3", 00:14:07.878 "trsvcid": "4420" 00:14:07.878 }, 00:14:07.878 "peer_address": { 00:14:07.878 "trtype": "TCP", 00:14:07.878 "adrfam": "IPv4", 00:14:07.878 "traddr": "10.0.0.1", 00:14:07.878 "trsvcid": "32950" 00:14:07.878 }, 00:14:07.878 "auth": { 00:14:07.878 "state": "completed", 00:14:07.878 "digest": "sha384", 00:14:07.878 "dhgroup": "ffdhe8192" 00:14:07.878 } 00:14:07.878 } 00:14:07.878 ]' 00:14:07.878 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.878 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:07.878 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.878 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:07.878 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.878 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.878 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.878 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.136 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:14:08.136 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:14:09.068 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.068 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:09.068 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.068 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.068 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.068 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.068 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:09.068 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:09.326 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:09.326 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.326 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:09.326 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:09.326 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:09.326 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.327 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.327 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.327 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.327 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.327 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.327 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.327 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.892 00:14:09.892 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:09.892 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.892 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.458 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.458 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.458 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.458 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.458 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.458 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.458 { 00:14:10.458 "cntlid": 93, 00:14:10.458 "qid": 0, 00:14:10.458 "state": "enabled", 00:14:10.458 "thread": 
"nvmf_tgt_poll_group_000", 00:14:10.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:10.458 "listen_address": { 00:14:10.458 "trtype": "TCP", 00:14:10.458 "adrfam": "IPv4", 00:14:10.458 "traddr": "10.0.0.3", 00:14:10.458 "trsvcid": "4420" 00:14:10.458 }, 00:14:10.458 "peer_address": { 00:14:10.458 "trtype": "TCP", 00:14:10.458 "adrfam": "IPv4", 00:14:10.458 "traddr": "10.0.0.1", 00:14:10.458 "trsvcid": "60808" 00:14:10.458 }, 00:14:10.458 "auth": { 00:14:10.458 "state": "completed", 00:14:10.458 "digest": "sha384", 00:14:10.458 "dhgroup": "ffdhe8192" 00:14:10.458 } 00:14:10.458 } 00:14:10.458 ]' 00:14:10.459 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.459 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:10.459 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.459 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:10.459 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.459 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.459 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.459 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.026 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:14:11.026 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:14:11.593 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.593 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:11.593 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.593 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.593 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.593 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.593 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:11.593 13:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:11.851 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:11.851 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.851 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:11.851 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:11.851 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:11.851 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.851 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:14:11.851 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.851 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.851 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.851 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:11.851 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:11.851 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:12.825 00:14:12.825 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.825 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.825 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.084 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.084 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.084 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.084 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.084 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.084 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.084 { 00:14:13.084 "cntlid": 95, 00:14:13.084 "qid": 0, 00:14:13.084 "state": "enabled", 00:14:13.084 
"thread": "nvmf_tgt_poll_group_000", 00:14:13.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:13.084 "listen_address": { 00:14:13.084 "trtype": "TCP", 00:14:13.084 "adrfam": "IPv4", 00:14:13.084 "traddr": "10.0.0.3", 00:14:13.084 "trsvcid": "4420" 00:14:13.084 }, 00:14:13.084 "peer_address": { 00:14:13.084 "trtype": "TCP", 00:14:13.084 "adrfam": "IPv4", 00:14:13.084 "traddr": "10.0.0.1", 00:14:13.084 "trsvcid": "60852" 00:14:13.084 }, 00:14:13.084 "auth": { 00:14:13.084 "state": "completed", 00:14:13.084 "digest": "sha384", 00:14:13.084 "dhgroup": "ffdhe8192" 00:14:13.084 } 00:14:13.084 } 00:14:13.084 ]' 00:14:13.084 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.084 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:13.084 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.084 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:13.084 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.084 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.084 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.084 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.344 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:14:13.344 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:14:14.279 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.279 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:14.279 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.279 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.279 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.279 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:14.279 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:14.279 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.279 13:48:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:14.279 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:14.537 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:14.537 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.537 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:14.537 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:14.537 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:14.537 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.537 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.537 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.537 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.537 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.537 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.537 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.537 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.795 00:14:14.795 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.795 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.795 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.053 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.053 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.053 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.053 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.312 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.312 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.312 { 00:14:15.312 "cntlid": 97, 00:14:15.312 "qid": 0, 00:14:15.312 "state": "enabled", 00:14:15.312 "thread": "nvmf_tgt_poll_group_000", 00:14:15.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:15.312 "listen_address": { 00:14:15.312 "trtype": "TCP", 00:14:15.312 "adrfam": "IPv4", 00:14:15.312 "traddr": "10.0.0.3", 00:14:15.312 "trsvcid": "4420" 00:14:15.312 }, 00:14:15.312 "peer_address": { 00:14:15.312 "trtype": "TCP", 00:14:15.312 "adrfam": "IPv4", 00:14:15.312 "traddr": "10.0.0.1", 00:14:15.312 "trsvcid": "60878" 00:14:15.312 }, 00:14:15.312 "auth": { 00:14:15.312 "state": "completed", 00:14:15.312 "digest": "sha512", 00:14:15.312 "dhgroup": "null" 00:14:15.312 } 00:14:15.312 } 00:14:15.312 ]' 00:14:15.312 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.312 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:15.312 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.312 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:15.312 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.312 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.312 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.312 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.570 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:14:15.571 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:14:16.507 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.507 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:16.507 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.507 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.507 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:16.507 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.507 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:16.507 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:16.766 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:16.766 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.766 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:16.766 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:16.766 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:16.766 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.766 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.766 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.766 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.766 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.766 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.766 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.766 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.025 00:14:17.025 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.025 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.025 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.283 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.283 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.283 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.283 13:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.283 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.283 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.283 { 00:14:17.283 "cntlid": 99, 00:14:17.283 "qid": 0, 00:14:17.283 "state": "enabled", 00:14:17.283 "thread": "nvmf_tgt_poll_group_000", 00:14:17.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:17.283 "listen_address": { 00:14:17.283 "trtype": "TCP", 00:14:17.283 "adrfam": "IPv4", 00:14:17.283 "traddr": "10.0.0.3", 00:14:17.283 "trsvcid": "4420" 00:14:17.283 }, 00:14:17.283 "peer_address": { 00:14:17.283 "trtype": "TCP", 00:14:17.283 "adrfam": "IPv4", 00:14:17.283 "traddr": "10.0.0.1", 00:14:17.283 "trsvcid": "60900" 00:14:17.283 }, 00:14:17.283 "auth": { 00:14:17.283 "state": "completed", 00:14:17.283 "digest": "sha512", 00:14:17.283 "dhgroup": "null" 00:14:17.283 } 00:14:17.283 } 00:14:17.283 ]' 00:14:17.283 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.542 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:17.542 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.542 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:17.542 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.542 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.542 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.542 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.801 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:14:17.801 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:14:18.368 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.368 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:18.368 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.368 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.368 13:48:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.368 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.368 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:18.368 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:18.935 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:18.935 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.935 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:18.935 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:18.935 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:18.935 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.935 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.935 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.935 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.935 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.935 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.935 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.935 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.193 00:14:19.193 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.193 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.193 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.452 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.452 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.452 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.452 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.452 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.452 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.452 { 00:14:19.452 "cntlid": 101, 00:14:19.452 "qid": 0, 00:14:19.452 "state": "enabled", 00:14:19.452 "thread": "nvmf_tgt_poll_group_000", 00:14:19.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:19.452 "listen_address": { 00:14:19.452 "trtype": "TCP", 00:14:19.452 "adrfam": "IPv4", 00:14:19.452 "traddr": "10.0.0.3", 00:14:19.452 "trsvcid": "4420" 00:14:19.452 }, 00:14:19.452 "peer_address": { 00:14:19.452 "trtype": "TCP", 00:14:19.452 "adrfam": "IPv4", 00:14:19.452 "traddr": "10.0.0.1", 00:14:19.452 "trsvcid": "55140" 00:14:19.452 }, 00:14:19.452 "auth": { 00:14:19.452 "state": "completed", 00:14:19.452 "digest": "sha512", 00:14:19.452 "dhgroup": "null" 00:14:19.452 } 00:14:19.452 } 00:14:19.452 ]' 00:14:19.452 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.452 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:19.452 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.452 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:19.452 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.711 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.711 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.711 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.969 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:14:19.969 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:14:20.537 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.537 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:20.537 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.537 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:20.537 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.537 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.537 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:20.537 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:20.795 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:20.795 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.795 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:20.795 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:20.795 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:20.795 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.795 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:14:20.795 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.795 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.795 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.795 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:20.795 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:20.795 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.362 00:14:21.362 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.362 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.362 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.621 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.621 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.621 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:21.621 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.621 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.621 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.621 { 00:14:21.621 "cntlid": 103, 00:14:21.621 "qid": 0, 00:14:21.621 "state": "enabled", 00:14:21.621 "thread": "nvmf_tgt_poll_group_000", 00:14:21.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:21.621 "listen_address": { 00:14:21.621 "trtype": "TCP", 00:14:21.621 "adrfam": "IPv4", 00:14:21.621 "traddr": "10.0.0.3", 00:14:21.621 "trsvcid": "4420" 00:14:21.621 }, 00:14:21.621 "peer_address": { 00:14:21.621 "trtype": "TCP", 00:14:21.621 "adrfam": "IPv4", 00:14:21.621 "traddr": "10.0.0.1", 00:14:21.621 "trsvcid": "55164" 00:14:21.621 }, 00:14:21.621 "auth": { 00:14:21.621 "state": "completed", 00:14:21.621 "digest": "sha512", 00:14:21.621 "dhgroup": "null" 00:14:21.621 } 00:14:21.621 } 00:14:21.621 ]' 00:14:21.621 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.621 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:21.621 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.621 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:21.621 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.621 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.621 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.621 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.880 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:14:21.880 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.816 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.383 00:14:23.383 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.383 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.383 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.383 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.383 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.383 
13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.383 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.641 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.641 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.641 { 00:14:23.641 "cntlid": 105, 00:14:23.641 "qid": 0, 00:14:23.641 "state": "enabled", 00:14:23.641 "thread": "nvmf_tgt_poll_group_000", 00:14:23.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:23.641 "listen_address": { 00:14:23.641 "trtype": "TCP", 00:14:23.641 "adrfam": "IPv4", 00:14:23.641 "traddr": "10.0.0.3", 00:14:23.641 "trsvcid": "4420" 00:14:23.641 }, 00:14:23.641 "peer_address": { 00:14:23.641 "trtype": "TCP", 00:14:23.641 "adrfam": "IPv4", 00:14:23.641 "traddr": "10.0.0.1", 00:14:23.641 "trsvcid": "55194" 00:14:23.641 }, 00:14:23.641 "auth": { 00:14:23.641 "state": "completed", 00:14:23.641 "digest": "sha512", 00:14:23.641 "dhgroup": "ffdhe2048" 00:14:23.641 } 00:14:23.641 } 00:14:23.641 ]' 00:14:23.641 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.641 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:23.641 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.641 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:23.641 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.641 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.641 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.641 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.899 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:14:23.899 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:14:24.834 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.834 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:24.834 13:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.834 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.834 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.834 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.834 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:24.834 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:25.093 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:25.093 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.093 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:25.093 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:25.093 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:25.093 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.093 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.093 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.093 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.093 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.093 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.093 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.093 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.353 00:14:25.353 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.353 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.353 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.919 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:25.919 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.919 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.919 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.919 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.919 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.919 { 00:14:25.919 "cntlid": 107, 00:14:25.919 "qid": 0, 00:14:25.919 "state": "enabled", 00:14:25.919 "thread": "nvmf_tgt_poll_group_000", 00:14:25.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:25.919 "listen_address": { 00:14:25.919 "trtype": "TCP", 00:14:25.919 "adrfam": "IPv4", 00:14:25.919 "traddr": "10.0.0.3", 00:14:25.919 "trsvcid": "4420" 00:14:25.919 }, 00:14:25.919 "peer_address": { 00:14:25.919 "trtype": "TCP", 00:14:25.919 "adrfam": "IPv4", 00:14:25.919 "traddr": "10.0.0.1", 00:14:25.919 "trsvcid": "55236" 00:14:25.919 }, 00:14:25.919 "auth": { 00:14:25.919 "state": "completed", 00:14:25.919 "digest": "sha512", 00:14:25.919 "dhgroup": "ffdhe2048" 00:14:25.919 } 00:14:25.919 } 00:14:25.919 ]' 00:14:25.919 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.919 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:25.919 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.919 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:25.919 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.919 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.919 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.919 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.179 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:14:26.179 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:14:27.115 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.115 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:27.115 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.115 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.115 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.115 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.115 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:27.115 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:27.373 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:27.373 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.373 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:27.373 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:27.373 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:27.373 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.373 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.373 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.373 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.373 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.373 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.373 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.373 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.632 00:14:27.632 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.632 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.632 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:28.198 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.198 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.198 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.198 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.198 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.198 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.198 { 00:14:28.198 "cntlid": 109, 00:14:28.198 "qid": 0, 00:14:28.198 "state": "enabled", 00:14:28.198 "thread": "nvmf_tgt_poll_group_000", 00:14:28.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:28.199 "listen_address": { 00:14:28.199 "trtype": "TCP", 00:14:28.199 "adrfam": "IPv4", 00:14:28.199 "traddr": "10.0.0.3", 00:14:28.199 "trsvcid": "4420" 00:14:28.199 }, 00:14:28.199 "peer_address": { 00:14:28.199 "trtype": "TCP", 00:14:28.199 "adrfam": "IPv4", 00:14:28.199 "traddr": "10.0.0.1", 00:14:28.199 "trsvcid": "55258" 00:14:28.199 }, 00:14:28.199 "auth": { 00:14:28.199 "state": "completed", 00:14:28.199 "digest": "sha512", 00:14:28.199 "dhgroup": "ffdhe2048" 00:14:28.199 } 00:14:28.199 } 00:14:28.199 ]' 00:14:28.199 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.199 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:28.199 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.199 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:28.199 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.199 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.199 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.199 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.457 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:14:28.457 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:14:29.398 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.398 13:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:29.398 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.398 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.398 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.398 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.398 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:29.398 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:29.657 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:29.657 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.657 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:29.657 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:29.657 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:29.657 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.657 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:14:29.657 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.657 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.657 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.657 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:29.657 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:29.657 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:29.916 00:14:29.916 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.916 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.916 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.482 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.482 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.482 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.482 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.482 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.483 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.483 { 00:14:30.483 "cntlid": 111, 00:14:30.483 "qid": 0, 00:14:30.483 "state": "enabled", 00:14:30.483 "thread": "nvmf_tgt_poll_group_000", 00:14:30.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:30.483 "listen_address": { 00:14:30.483 "trtype": "TCP", 00:14:30.483 "adrfam": "IPv4", 00:14:30.483 "traddr": "10.0.0.3", 00:14:30.483 "trsvcid": "4420" 00:14:30.483 }, 00:14:30.483 "peer_address": { 00:14:30.483 "trtype": "TCP", 00:14:30.483 "adrfam": "IPv4", 00:14:30.483 "traddr": "10.0.0.1", 00:14:30.483 "trsvcid": "34524" 00:14:30.483 }, 00:14:30.483 "auth": { 00:14:30.483 "state": "completed", 00:14:30.483 "digest": "sha512", 00:14:30.483 "dhgroup": "ffdhe2048" 00:14:30.483 } 00:14:30.483 } 00:14:30.483 ]' 00:14:30.483 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.483 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:30.483 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.483 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:30.483 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.483 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.483 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.483 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.741 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:14:30.741 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:14:31.675 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.676 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:31.676 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.676 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.676 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.676 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:31.676 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.676 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:31.676 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:31.953 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:31.953 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.953 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:31.953 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:31.953 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:31.953 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.953 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.953 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.953 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.953 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.953 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.953 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.953 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.260 00:14:32.260 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.260 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.260 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.517 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.517 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.517 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.517 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.776 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.776 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.776 { 00:14:32.776 "cntlid": 113, 00:14:32.776 "qid": 0, 00:14:32.776 "state": "enabled", 00:14:32.776 "thread": "nvmf_tgt_poll_group_000", 00:14:32.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:32.776 "listen_address": { 00:14:32.776 "trtype": "TCP", 00:14:32.776 "adrfam": "IPv4", 00:14:32.776 "traddr": "10.0.0.3", 00:14:32.776 "trsvcid": "4420" 00:14:32.776 }, 00:14:32.776 "peer_address": { 00:14:32.776 "trtype": "TCP", 00:14:32.776 "adrfam": "IPv4", 00:14:32.776 "traddr": "10.0.0.1", 00:14:32.776 "trsvcid": "34560" 00:14:32.776 }, 00:14:32.776 "auth": { 00:14:32.776 "state": "completed", 00:14:32.776 "digest": "sha512", 00:14:32.776 "dhgroup": "ffdhe3072" 00:14:32.776 } 00:14:32.776 } 00:14:32.776 ]' 00:14:32.776 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.776 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:32.776 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.776 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:32.776 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.776 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.776 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.776 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.343 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:14:33.343 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret 
DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:14:33.909 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.909 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:33.909 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.909 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.909 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.909 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.909 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:33.909 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:34.476 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:34.476 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.476 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:34.476 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:34.476 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:34.476 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.476 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.476 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.476 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.476 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.476 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.476 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.476 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.734 00:14:34.734 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.734 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.734 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.300 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.300 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.300 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.300 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.300 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.300 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.300 { 00:14:35.300 "cntlid": 115, 00:14:35.300 "qid": 0, 00:14:35.300 "state": "enabled", 00:14:35.300 "thread": "nvmf_tgt_poll_group_000", 00:14:35.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:35.300 "listen_address": { 00:14:35.300 "trtype": "TCP", 00:14:35.300 "adrfam": "IPv4", 00:14:35.300 "traddr": "10.0.0.3", 00:14:35.300 "trsvcid": "4420" 00:14:35.300 }, 00:14:35.300 "peer_address": { 00:14:35.300 "trtype": "TCP", 00:14:35.300 "adrfam": "IPv4", 00:14:35.300 "traddr": "10.0.0.1", 00:14:35.300 "trsvcid": "34578" 00:14:35.300 }, 00:14:35.300 "auth": { 00:14:35.300 "state": "completed", 00:14:35.300 "digest": "sha512", 00:14:35.300 "dhgroup": "ffdhe3072" 00:14:35.300 } 00:14:35.300 } 00:14:35.300 ]' 00:14:35.300 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.300 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:35.300 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.300 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:35.300 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.300 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.300 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.300 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.558 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:14:35.558 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 
88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.492 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.058 00:14:37.058 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.058 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.058 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.316 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.316 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.316 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.316 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.316 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.316 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.316 { 00:14:37.316 "cntlid": 117, 00:14:37.316 "qid": 0, 00:14:37.316 "state": "enabled", 00:14:37.316 "thread": "nvmf_tgt_poll_group_000", 00:14:37.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:37.316 "listen_address": { 00:14:37.316 "trtype": "TCP", 00:14:37.316 "adrfam": "IPv4", 00:14:37.316 "traddr": "10.0.0.3", 00:14:37.316 "trsvcid": "4420" 00:14:37.316 }, 00:14:37.316 "peer_address": { 00:14:37.316 "trtype": "TCP", 00:14:37.316 "adrfam": "IPv4", 00:14:37.316 "traddr": "10.0.0.1", 00:14:37.316 "trsvcid": "34618" 00:14:37.316 }, 00:14:37.316 "auth": { 00:14:37.316 "state": "completed", 00:14:37.316 "digest": "sha512", 00:14:37.316 "dhgroup": "ffdhe3072" 00:14:37.316 } 00:14:37.316 } 00:14:37.316 ]' 00:14:37.316 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.575 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:37.575 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.575 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:37.575 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.575 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.575 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.575 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.833 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:14:37.833 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:14:38.767 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.767 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:38.767 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.767 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.767 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.767 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.767 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:38.767 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:39.026 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:39.026 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.026 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:39.026 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:39.026 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:39.026 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.026 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:14:39.026 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.026 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.026 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.026 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:39.026 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:39.026 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:39.284 00:14:39.543 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.543 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.543 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.800 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.800 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.800 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.800 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.800 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.800 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.800 { 00:14:39.800 "cntlid": 119, 00:14:39.800 "qid": 0, 00:14:39.800 "state": "enabled", 00:14:39.800 "thread": "nvmf_tgt_poll_group_000", 00:14:39.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:39.800 "listen_address": { 00:14:39.800 "trtype": "TCP", 00:14:39.800 "adrfam": "IPv4", 00:14:39.800 "traddr": "10.0.0.3", 00:14:39.800 "trsvcid": "4420" 00:14:39.800 }, 00:14:39.800 "peer_address": { 00:14:39.800 "trtype": "TCP", 00:14:39.800 "adrfam": "IPv4", 00:14:39.800 "traddr": "10.0.0.1", 00:14:39.800 "trsvcid": "37368" 00:14:39.800 }, 00:14:39.800 "auth": { 00:14:39.800 "state": "completed", 00:14:39.800 "digest": "sha512", 00:14:39.800 "dhgroup": "ffdhe3072" 00:14:39.800 } 00:14:39.800 } 00:14:39.800 ]' 00:14:39.800 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.801 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:39.801 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.801 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:39.801 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.801 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.801 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.801 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.059 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:14:40.059 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:14:40.992 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.993 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:40.993 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.993 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.993 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.993 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:40.993 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.993 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:40.993 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:41.251 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:41.251 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.251 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.251 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:41.251 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:41.251 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.251 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.251 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.251 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.251 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.251 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.251 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.251 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.817 00:14:41.817 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.817 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.817 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.075 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.075 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.075 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.075 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.075 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.075 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.075 { 00:14:42.075 "cntlid": 121, 00:14:42.075 "qid": 0, 00:14:42.075 "state": "enabled", 00:14:42.075 "thread": "nvmf_tgt_poll_group_000", 00:14:42.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:42.075 "listen_address": { 00:14:42.075 "trtype": "TCP", 00:14:42.075 "adrfam": "IPv4", 00:14:42.075 "traddr": "10.0.0.3", 00:14:42.075 "trsvcid": "4420" 00:14:42.075 }, 00:14:42.075 "peer_address": { 00:14:42.075 "trtype": "TCP", 00:14:42.075 "adrfam": "IPv4", 00:14:42.075 "traddr": "10.0.0.1", 00:14:42.075 "trsvcid": "37390" 00:14:42.075 }, 00:14:42.075 "auth": { 00:14:42.075 "state": "completed", 00:14:42.075 "digest": "sha512", 00:14:42.075 "dhgroup": "ffdhe4096" 00:14:42.075 } 00:14:42.075 } 00:14:42.075 ]' 00:14:42.075 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.075 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.075 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.075 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:42.075 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.075 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.075 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.075 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.641 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret 
DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:14:42.641 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:14:43.208 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.208 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:43.208 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.208 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.208 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.208 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.208 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:43.208 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:43.466 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:43.466 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.466 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:43.466 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:43.466 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:43.466 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.467 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.467 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.467 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.467 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.467 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.467 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.467 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.033 00:14:44.033 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.033 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.033 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.291 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.291 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.291 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.291 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.291 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.291 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.291 { 00:14:44.291 "cntlid": 123, 00:14:44.291 "qid": 0, 00:14:44.291 "state": "enabled", 00:14:44.291 "thread": "nvmf_tgt_poll_group_000", 00:14:44.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:44.291 "listen_address": { 00:14:44.291 "trtype": "TCP", 00:14:44.291 "adrfam": "IPv4", 00:14:44.291 "traddr": "10.0.0.3", 00:14:44.291 "trsvcid": "4420" 00:14:44.291 }, 00:14:44.291 "peer_address": { 00:14:44.291 "trtype": "TCP", 00:14:44.291 "adrfam": "IPv4", 00:14:44.291 "traddr": "10.0.0.1", 00:14:44.291 "trsvcid": "37410" 00:14:44.291 }, 00:14:44.291 "auth": { 00:14:44.291 "state": "completed", 00:14:44.291 "digest": "sha512", 00:14:44.291 "dhgroup": "ffdhe4096" 00:14:44.291 } 00:14:44.291 } 00:14:44.291 ]' 00:14:44.291 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.291 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.291 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.291 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:44.291 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.548 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.548 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.548 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.805 13:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:14:44.805 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:14:45.370 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.370 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:45.370 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.370 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.370 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.370 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.370 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:45.370 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:45.628 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:45.628 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.628 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:45.628 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:45.628 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:45.628 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.628 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.628 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.628 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.886 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.886 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.886 13:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.886 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.144 00:14:46.144 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.144 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.144 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.401 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.401 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.401 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.401 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.401 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.401 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.401 { 00:14:46.401 "cntlid": 125, 00:14:46.401 "qid": 0, 00:14:46.401 "state": "enabled", 00:14:46.401 "thread": "nvmf_tgt_poll_group_000", 00:14:46.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:46.401 "listen_address": { 00:14:46.401 "trtype": "TCP", 00:14:46.401 "adrfam": "IPv4", 00:14:46.401 "traddr": "10.0.0.3", 00:14:46.401 "trsvcid": "4420" 00:14:46.401 }, 00:14:46.401 "peer_address": { 00:14:46.401 "trtype": "TCP", 00:14:46.401 "adrfam": "IPv4", 00:14:46.401 "traddr": "10.0.0.1", 00:14:46.401 "trsvcid": "37442" 00:14:46.401 }, 00:14:46.401 "auth": { 00:14:46.401 "state": "completed", 00:14:46.401 "digest": "sha512", 00:14:46.401 "dhgroup": "ffdhe4096" 00:14:46.401 } 00:14:46.401 } 00:14:46.401 ]' 00:14:46.401 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.659 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.659 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.659 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:46.659 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.659 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.659 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.659 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.916 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:14:46.916 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:14:47.857 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.857 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:47.857 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.857 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.857 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.857 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.857 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:47.857 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:48.115 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:48.115 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.115 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:48.115 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:48.115 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:48.115 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.115 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:14:48.115 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.115 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.115 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.115 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:14:48.115 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:48.115 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:48.682 00:14:48.682 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.682 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.682 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.940 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.940 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.940 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.940 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.940 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.940 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.940 { 00:14:48.940 "cntlid": 127, 00:14:48.940 "qid": 0, 00:14:48.940 "state": "enabled", 00:14:48.940 "thread": "nvmf_tgt_poll_group_000", 00:14:48.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:48.940 "listen_address": { 00:14:48.940 "trtype": "TCP", 00:14:48.940 "adrfam": "IPv4", 00:14:48.940 "traddr": "10.0.0.3", 00:14:48.940 "trsvcid": "4420" 00:14:48.940 }, 00:14:48.940 "peer_address": { 00:14:48.940 "trtype": "TCP", 00:14:48.940 "adrfam": "IPv4", 00:14:48.940 "traddr": "10.0.0.1", 00:14:48.940 "trsvcid": "40806" 00:14:48.940 }, 00:14:48.940 "auth": { 00:14:48.940 "state": "completed", 00:14:48.940 "digest": "sha512", 00:14:48.940 "dhgroup": "ffdhe4096" 00:14:48.940 } 00:14:48.940 } 00:14:48.940 ]' 00:14:48.940 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.940 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.940 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.940 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:48.940 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.940 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.940 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.940 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.229 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:14:49.229 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:14:50.177 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.177 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:50.177 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.177 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.177 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.177 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:50.177 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.177 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:50.177 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:50.435 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:14:50.435 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.435 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:50.435 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:50.435 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:50.435 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.435 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.435 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.435 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.435 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.435 13:49:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.435 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.435 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.002 00:14:51.002 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.002 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.002 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.261 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.261 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.261 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.261 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.261 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.261 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.261 { 00:14:51.261 "cntlid": 129, 00:14:51.261 "qid": 0, 00:14:51.261 "state": "enabled", 00:14:51.261 "thread": "nvmf_tgt_poll_group_000", 00:14:51.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:51.261 "listen_address": { 00:14:51.261 "trtype": "TCP", 00:14:51.261 "adrfam": "IPv4", 00:14:51.261 "traddr": "10.0.0.3", 00:14:51.261 "trsvcid": "4420" 00:14:51.261 }, 00:14:51.261 "peer_address": { 00:14:51.261 "trtype": "TCP", 00:14:51.261 "adrfam": "IPv4", 00:14:51.261 "traddr": "10.0.0.1", 00:14:51.261 "trsvcid": "40830" 00:14:51.261 }, 00:14:51.261 "auth": { 00:14:51.261 "state": "completed", 00:14:51.261 "digest": "sha512", 00:14:51.261 "dhgroup": "ffdhe6144" 00:14:51.261 } 00:14:51.261 } 00:14:51.261 ]' 00:14:51.261 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.261 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:51.261 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.261 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:51.261 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.519 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.519 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.519 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.777 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:14:51.777 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:14:52.343 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.343 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:52.343 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.343 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.343 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.343 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.343 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:52.343 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:52.909 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:14:52.909 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.909 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:52.909 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:52.909 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:52.909 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.909 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.909 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.909 13:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.909 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.909 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.909 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.909 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.166 00:14:53.166 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.166 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.166 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.733 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.733 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.733 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.733 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.733 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.733 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.733 { 00:14:53.733 "cntlid": 131, 00:14:53.733 "qid": 0, 00:14:53.733 "state": "enabled", 00:14:53.733 "thread": "nvmf_tgt_poll_group_000", 00:14:53.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:53.733 "listen_address": { 00:14:53.733 "trtype": "TCP", 00:14:53.733 "adrfam": "IPv4", 00:14:53.733 "traddr": "10.0.0.3", 00:14:53.733 "trsvcid": "4420" 00:14:53.733 }, 00:14:53.733 "peer_address": { 00:14:53.733 "trtype": "TCP", 00:14:53.733 "adrfam": "IPv4", 00:14:53.733 "traddr": "10.0.0.1", 00:14:53.733 "trsvcid": "40862" 00:14:53.733 }, 00:14:53.733 "auth": { 00:14:53.733 "state": "completed", 00:14:53.733 "digest": "sha512", 00:14:53.733 "dhgroup": "ffdhe6144" 00:14:53.733 } 00:14:53.733 } 00:14:53.733 ]' 00:14:53.733 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.733 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:53.733 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.734 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:53.734 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:14:53.734 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.734 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.734 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.300 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:14:54.301 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:14:54.866 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.866 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:54.866 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.866 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.866 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.866 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.866 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:54.866 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:55.124 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:14:55.124 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.124 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:55.124 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:55.124 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:55.124 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.124 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.124 13:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.124 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.124 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.124 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.124 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.124 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.690 00:14:55.690 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.690 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.690 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.949 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.949 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.949 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.949 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.949 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.949 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.949 { 00:14:55.949 "cntlid": 133, 00:14:55.949 "qid": 0, 00:14:55.949 "state": "enabled", 00:14:55.949 "thread": "nvmf_tgt_poll_group_000", 00:14:55.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:55.949 "listen_address": { 00:14:55.949 "trtype": "TCP", 00:14:55.949 "adrfam": "IPv4", 00:14:55.949 "traddr": "10.0.0.3", 00:14:55.949 "trsvcid": "4420" 00:14:55.949 }, 00:14:55.949 "peer_address": { 00:14:55.949 "trtype": "TCP", 00:14:55.949 "adrfam": "IPv4", 00:14:55.949 "traddr": "10.0.0.1", 00:14:55.949 "trsvcid": "40880" 00:14:55.949 }, 00:14:55.949 "auth": { 00:14:55.949 "state": "completed", 00:14:55.949 "digest": "sha512", 00:14:55.949 "dhgroup": "ffdhe6144" 00:14:55.949 } 00:14:55.949 } 00:14:55.949 ]' 00:14:55.949 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.207 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.207 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.207 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:14:56.207 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.207 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.207 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.207 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.466 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:14:56.466 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:14:57.033 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.033 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:57.033 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.033 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.033 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.033 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.033 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:57.033 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:57.331 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:57.331 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.331 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:57.331 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:57.331 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:57.331 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.331 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:14:57.331 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.331 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.331 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.331 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:57.331 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:57.331 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:57.898 00:14:57.898 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.898 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.898 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.156 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.156 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.156 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.156 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.156 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.156 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.156 { 00:14:58.156 "cntlid": 135, 00:14:58.156 "qid": 0, 00:14:58.156 "state": "enabled", 00:14:58.156 "thread": "nvmf_tgt_poll_group_000", 00:14:58.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:14:58.156 "listen_address": { 00:14:58.156 "trtype": "TCP", 00:14:58.156 "adrfam": "IPv4", 00:14:58.156 "traddr": "10.0.0.3", 00:14:58.156 "trsvcid": "4420" 00:14:58.156 }, 00:14:58.156 "peer_address": { 00:14:58.156 "trtype": "TCP", 00:14:58.156 "adrfam": "IPv4", 00:14:58.156 "traddr": "10.0.0.1", 00:14:58.156 "trsvcid": "40910" 00:14:58.156 }, 00:14:58.156 "auth": { 00:14:58.156 "state": "completed", 00:14:58.156 "digest": "sha512", 00:14:58.156 "dhgroup": "ffdhe6144" 00:14:58.156 } 00:14:58.156 } 00:14:58.156 ]' 00:14:58.156 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.156 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.156 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.415 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:58.415 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.415 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.415 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.415 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.673 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:14:58.673 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.606 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.539 00:15:00.539 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.539 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.539 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.798 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.798 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.798 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.798 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.798 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.798 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.798 { 00:15:00.798 "cntlid": 137, 00:15:00.798 "qid": 0, 00:15:00.798 "state": "enabled", 00:15:00.798 "thread": "nvmf_tgt_poll_group_000", 00:15:00.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:15:00.798 "listen_address": { 00:15:00.798 "trtype": "TCP", 00:15:00.798 "adrfam": "IPv4", 00:15:00.798 "traddr": "10.0.0.3", 00:15:00.798 "trsvcid": "4420" 00:15:00.798 }, 00:15:00.798 "peer_address": { 00:15:00.798 "trtype": "TCP", 00:15:00.798 "adrfam": "IPv4", 00:15:00.798 "traddr": "10.0.0.1", 00:15:00.798 "trsvcid": "34824" 00:15:00.798 }, 00:15:00.798 "auth": { 00:15:00.798 "state": "completed", 00:15:00.798 "digest": "sha512", 00:15:00.798 "dhgroup": "ffdhe8192" 00:15:00.798 } 00:15:00.798 } 00:15:00.798 ]' 00:15:00.798 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.798 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.798 13:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.798 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:00.798 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.798 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.798 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.798 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.424 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:15:01.424 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:15:01.990 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.990 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:01.990 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.990 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.990 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.990 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.990 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:01.990 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:02.249 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:15:02.249 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.249 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:02.249 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:02.249 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:02.249 13:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.249 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.249 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.249 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.249 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.249 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.249 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.249 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.186 00:15:03.186 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.186 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.186 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.443 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.443 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.443 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.443 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.443 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.443 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.443 { 00:15:03.443 "cntlid": 139, 00:15:03.443 "qid": 0, 00:15:03.443 "state": "enabled", 00:15:03.443 "thread": "nvmf_tgt_poll_group_000", 00:15:03.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:15:03.443 "listen_address": { 00:15:03.443 "trtype": "TCP", 00:15:03.443 "adrfam": "IPv4", 00:15:03.443 "traddr": "10.0.0.3", 00:15:03.443 "trsvcid": "4420" 00:15:03.443 }, 00:15:03.443 "peer_address": { 00:15:03.443 "trtype": "TCP", 00:15:03.443 "adrfam": "IPv4", 00:15:03.443 "traddr": "10.0.0.1", 00:15:03.443 "trsvcid": "34844" 00:15:03.443 }, 00:15:03.443 "auth": { 00:15:03.443 "state": "completed", 00:15:03.443 "digest": "sha512", 00:15:03.443 "dhgroup": "ffdhe8192" 00:15:03.443 } 00:15:03.443 } 00:15:03.443 ]' 00:15:03.443 13:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.443 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:03.443 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.444 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:03.444 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.444 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.444 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.444 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.010 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:15:04.010 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: --dhchap-ctrl-secret DHHC-1:02:MTMwY2NmN2ZmYTg3NDZhM2VmNGQ2MTMxZjVlMGJjOTUzMzMzODE2NjU4YjQ0MTk205/0Vw==: 00:15:04.599 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.600 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:04.600 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.600 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.600 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.600 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.600 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:04.600 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:04.858 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:04.858 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.858 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:04.858 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:15:04.858 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:04.858 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.858 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.858 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.858 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.858 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.858 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.858 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.858 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.424 00:15:05.682 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.682 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.682 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.941 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.941 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.941 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.941 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.941 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.941 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.941 { 00:15:05.941 "cntlid": 141, 00:15:05.941 "qid": 0, 00:15:05.941 "state": "enabled", 00:15:05.941 "thread": "nvmf_tgt_poll_group_000", 00:15:05.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:15:05.941 "listen_address": { 00:15:05.941 "trtype": "TCP", 00:15:05.941 "adrfam": "IPv4", 00:15:05.941 "traddr": "10.0.0.3", 00:15:05.941 "trsvcid": "4420" 00:15:05.941 }, 00:15:05.941 "peer_address": { 00:15:05.941 "trtype": "TCP", 00:15:05.941 "adrfam": "IPv4", 00:15:05.941 "traddr": "10.0.0.1", 00:15:05.941 "trsvcid": "34866" 00:15:05.941 }, 00:15:05.941 "auth": { 00:15:05.941 "state": "completed", 00:15:05.941 "digest": 
"sha512", 00:15:05.941 "dhgroup": "ffdhe8192" 00:15:05.941 } 00:15:05.941 } 00:15:05.941 ]' 00:15:05.941 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.941 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:05.941 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.941 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:05.941 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.941 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.941 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.941 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.507 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:15:06.507 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:01:ZmI0ZWZlNmY5ZTBmYzg5NDUxYjk0MTBmOGM5M2VjOGSWiqvU: 00:15:07.075 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.075 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:07.075 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.075 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.075 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.075 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.075 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:07.075 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:07.333 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:07.333 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.333 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:15:07.333 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:07.333 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:07.333 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.334 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:15:07.334 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.334 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.334 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.334 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:07.334 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:07.334 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:07.900 00:15:07.900 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.900 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.900 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.485 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.485 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.485 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.485 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.485 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.485 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.485 { 00:15:08.485 "cntlid": 143, 00:15:08.485 "qid": 0, 00:15:08.485 "state": "enabled", 00:15:08.485 "thread": "nvmf_tgt_poll_group_000", 00:15:08.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:15:08.485 "listen_address": { 00:15:08.485 "trtype": "TCP", 00:15:08.485 "adrfam": "IPv4", 00:15:08.485 "traddr": "10.0.0.3", 00:15:08.485 "trsvcid": "4420" 00:15:08.485 }, 00:15:08.485 "peer_address": { 00:15:08.485 "trtype": "TCP", 00:15:08.485 "adrfam": "IPv4", 00:15:08.485 "traddr": "10.0.0.1", 00:15:08.485 "trsvcid": "34894" 00:15:08.485 }, 00:15:08.485 "auth": { 00:15:08.485 "state": "completed", 00:15:08.485 
"digest": "sha512", 00:15:08.485 "dhgroup": "ffdhe8192" 00:15:08.485 } 00:15:08.485 } 00:15:08.485 ]' 00:15:08.485 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.485 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:08.485 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.485 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:08.485 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.485 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.485 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.485 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.744 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:15:08.744 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:15:09.677 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.678 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:09.678 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.678 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.678 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.678 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:09.678 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:09.678 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:09.678 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:09.678 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:09.678 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:09.936 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:09.936 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.936 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:09.936 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:09.936 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:09.936 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.936 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.936 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.936 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.936 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.936 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.936 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.936 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.502 00:15:10.502 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.502 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.502 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.761 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.761 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.761 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.761 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.761 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.761 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.761 { 00:15:10.761 "cntlid": 145, 00:15:10.761 "qid": 0, 00:15:10.761 "state": "enabled", 00:15:10.761 "thread": "nvmf_tgt_poll_group_000", 00:15:10.761 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:15:10.761 "listen_address": { 00:15:10.761 "trtype": "TCP", 00:15:10.761 "adrfam": "IPv4", 00:15:10.761 "traddr": "10.0.0.3", 00:15:10.761 "trsvcid": "4420" 00:15:10.761 }, 00:15:10.761 "peer_address": { 00:15:10.761 "trtype": "TCP", 00:15:10.761 "adrfam": "IPv4", 00:15:10.761 "traddr": "10.0.0.1", 00:15:10.761 "trsvcid": "47872" 00:15:10.761 }, 00:15:10.761 "auth": { 00:15:10.761 "state": "completed", 00:15:10.761 "digest": "sha512", 00:15:10.761 "dhgroup": "ffdhe8192" 00:15:10.761 } 00:15:10.761 } 00:15:10.761 ]' 00:15:10.761 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.761 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:10.761 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.019 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:11.019 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.019 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.019 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.019 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.277 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:15:11.277 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:00:MjQwNzc1ZjMwZGYxMTk3OTc0NzExMzNjNzM1YTc2MTM5OTZjYzExNTI2ZjdjODg521ffuA==: --dhchap-ctrl-secret DHHC-1:03:MjQwNTY5YzMzYzhiNmFhYzE5MjRlMTQzNGE3NWNkMjFjM2NmNmFmMDYzYzM3YjFmODBjZGExMWVhMjM2NTlmMn652dg=: 00:15:11.863 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 00:15:11.863 13:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:11.863 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:12.834 request: 00:15:12.834 { 00:15:12.834 "name": "nvme0", 00:15:12.834 "trtype": "tcp", 00:15:12.834 "traddr": "10.0.0.3", 00:15:12.834 "adrfam": "ipv4", 00:15:12.834 "trsvcid": "4420", 00:15:12.834 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:12.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:15:12.834 "prchk_reftag": false, 00:15:12.834 "prchk_guard": false, 00:15:12.834 "hdgst": false, 00:15:12.834 "ddgst": false, 00:15:12.834 "dhchap_key": "key2", 00:15:12.834 "allow_unrecognized_csi": false, 00:15:12.834 "method": "bdev_nvme_attach_controller", 00:15:12.834 "req_id": 1 00:15:12.834 } 00:15:12.834 Got JSON-RPC error response 00:15:12.834 response: 00:15:12.834 { 00:15:12.834 "code": -5, 00:15:12.834 "message": "Input/output error" 00:15:12.834 } 00:15:12.834 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:12.834 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:12.834 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:12.834 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:12.834 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:12.834 
13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.834 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.834 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.834 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.835 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.835 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.835 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.835 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:12.835 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:12.835 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:12.835 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:12.835 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.835 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:12.835 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.835 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:12.835 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:12.835 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:13.399 request: 00:15:13.399 { 00:15:13.399 "name": "nvme0", 00:15:13.399 "trtype": "tcp", 00:15:13.399 "traddr": "10.0.0.3", 00:15:13.399 "adrfam": "ipv4", 00:15:13.399 "trsvcid": "4420", 00:15:13.399 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:13.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:15:13.399 "prchk_reftag": false, 00:15:13.399 "prchk_guard": false, 00:15:13.399 "hdgst": false, 00:15:13.399 "ddgst": false, 00:15:13.399 "dhchap_key": "key1", 00:15:13.399 "dhchap_ctrlr_key": "ckey2", 00:15:13.400 "allow_unrecognized_csi": false, 00:15:13.400 "method": "bdev_nvme_attach_controller", 00:15:13.400 "req_id": 1 00:15:13.400 } 00:15:13.400 Got JSON-RPC error response 00:15:13.400 response: 00:15:13.400 { 
00:15:13.400 "code": -5, 00:15:13.400 "message": "Input/output error" 00:15:13.400 } 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.400 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.965 
request: 00:15:13.965 { 00:15:13.965 "name": "nvme0", 00:15:13.965 "trtype": "tcp", 00:15:13.965 "traddr": "10.0.0.3", 00:15:13.965 "adrfam": "ipv4", 00:15:13.965 "trsvcid": "4420", 00:15:13.965 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:13.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:15:13.965 "prchk_reftag": false, 00:15:13.965 "prchk_guard": false, 00:15:13.965 "hdgst": false, 00:15:13.965 "ddgst": false, 00:15:13.965 "dhchap_key": "key1", 00:15:13.965 "dhchap_ctrlr_key": "ckey1", 00:15:13.965 "allow_unrecognized_csi": false, 00:15:13.965 "method": "bdev_nvme_attach_controller", 00:15:13.965 "req_id": 1 00:15:13.965 } 00:15:13.965 Got JSON-RPC error response 00:15:13.965 response: 00:15:13.965 { 00:15:13.965 "code": -5, 00:15:13.965 "message": "Input/output error" 00:15:13.965 } 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67768 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67768 ']' 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67768 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67768 00:15:13.965 killing process with pid 67768 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67768' 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67768 00:15:13.965 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67768 00:15:14.531 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:14.531 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:14.531 13:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:14.531 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.531 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=70985 00:15:14.532 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 70985 00:15:14.532 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:14.532 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70985 ']' 00:15:14.532 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.532 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.532 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.532 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.532 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.465 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.465 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:15.465 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:15.465 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:15.465 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.465 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.465 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:15.465 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70985 00:15:15.465 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70985 ']' 00:15:15.465 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.465 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.465 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:15.465 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.465 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.722 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.722 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:15.722 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:15.722 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.722 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.979 null0 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Yvz 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.oxL ]] 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oxL 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.oLv 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.KLy ]] 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KLy 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:15.979 13:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.JGR 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Esk ]] 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Esk 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ufx 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.979 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.238 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.238 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:16.238 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:15:16.238 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.171 nvme0n1 00:15:17.171 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.171 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.171 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.429 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.429 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.429 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.429 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.429 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.429 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.429 { 00:15:17.429 "cntlid": 1, 00:15:17.429 "qid": 0, 00:15:17.429 "state": "enabled", 00:15:17.429 "thread": "nvmf_tgt_poll_group_000", 00:15:17.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:15:17.429 "listen_address": { 00:15:17.429 "trtype": "TCP", 00:15:17.429 "adrfam": "IPv4", 00:15:17.429 "traddr": "10.0.0.3", 00:15:17.429 "trsvcid": "4420" 00:15:17.429 }, 00:15:17.429 "peer_address": { 00:15:17.429 "trtype": "TCP", 00:15:17.429 "adrfam": "IPv4", 00:15:17.429 "traddr": "10.0.0.1", 00:15:17.429 "trsvcid": "47936" 00:15:17.429 }, 00:15:17.429 "auth": { 00:15:17.429 "state": "completed", 00:15:17.429 "digest": "sha512", 00:15:17.429 "dhgroup": "ffdhe8192" 00:15:17.429 } 00:15:17.429 } 00:15:17.429 ]' 00:15:17.429 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.687 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:17.687 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.687 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:17.687 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.687 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.687 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.687 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.945 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:15:17.945 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:15:18.877 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.877 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:18.877 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.877 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.877 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.878 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key3 00:15:18.878 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.878 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.878 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.878 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:18.878 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:19.134 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:19.135 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:19.135 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:19.135 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:19.135 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:19.135 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:19.135 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:19.135 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:19.135 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.135 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.392 request: 00:15:19.392 { 00:15:19.392 "name": "nvme0", 00:15:19.393 "trtype": "tcp", 00:15:19.393 "traddr": "10.0.0.3", 00:15:19.393 "adrfam": "ipv4", 00:15:19.393 "trsvcid": "4420", 00:15:19.393 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:19.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:15:19.393 "prchk_reftag": false, 00:15:19.393 "prchk_guard": false, 00:15:19.393 "hdgst": false, 00:15:19.393 "ddgst": false, 00:15:19.393 "dhchap_key": "key3", 00:15:19.393 "allow_unrecognized_csi": false, 00:15:19.393 "method": "bdev_nvme_attach_controller", 00:15:19.393 "req_id": 1 00:15:19.393 } 00:15:19.393 Got JSON-RPC error response 00:15:19.393 response: 00:15:19.393 { 00:15:19.393 "code": -5, 00:15:19.393 "message": "Input/output error" 00:15:19.393 } 00:15:19.393 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:19.393 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:19.393 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:19.393 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:19.393 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:19.393 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:19.393 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:19.393 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:19.650 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:19.650 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:19.650 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:19.650 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:19.650 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:19.650 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:19.650 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:19.650 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:19.650 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.650 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.909 request: 00:15:19.909 { 00:15:19.909 "name": "nvme0", 00:15:19.909 "trtype": "tcp", 00:15:19.909 "traddr": "10.0.0.3", 00:15:19.909 "adrfam": "ipv4", 00:15:19.909 "trsvcid": "4420", 00:15:19.909 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:19.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:15:19.909 "prchk_reftag": false, 00:15:19.909 "prchk_guard": false, 00:15:19.909 "hdgst": false, 00:15:19.909 "ddgst": false, 00:15:19.909 "dhchap_key": "key3", 00:15:19.909 "allow_unrecognized_csi": false, 00:15:19.909 "method": "bdev_nvme_attach_controller", 00:15:19.909 "req_id": 1 00:15:19.909 } 00:15:19.909 Got JSON-RPC error response 00:15:19.909 response: 00:15:19.909 { 00:15:19.909 "code": -5, 00:15:19.909 "message": "Input/output error" 00:15:19.909 } 00:15:19.909 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:19.909 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:19.909 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:19.909 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:19.909 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:19.909 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:19.909 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:19.909 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:19.909 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:19.909 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:20.168 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:20.733 request: 00:15:20.733 { 00:15:20.733 "name": "nvme0", 00:15:20.733 "trtype": "tcp", 00:15:20.733 "traddr": "10.0.0.3", 00:15:20.733 "adrfam": "ipv4", 00:15:20.733 "trsvcid": "4420", 00:15:20.733 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:20.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:15:20.733 "prchk_reftag": false, 00:15:20.733 "prchk_guard": false, 00:15:20.733 "hdgst": false, 00:15:20.733 "ddgst": false, 00:15:20.733 "dhchap_key": "key0", 00:15:20.733 "dhchap_ctrlr_key": "key1", 00:15:20.733 "allow_unrecognized_csi": false, 00:15:20.733 "method": "bdev_nvme_attach_controller", 00:15:20.733 "req_id": 1 00:15:20.733 } 00:15:20.733 Got JSON-RPC error response 00:15:20.733 response: 00:15:20.733 { 00:15:20.733 "code": -5, 00:15:20.733 "message": "Input/output error" 00:15:20.733 } 00:15:20.734 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:20.734 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:20.734 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:20.734 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:15:20.734 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:20.734 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:20.734 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:21.299 nvme0n1 00:15:21.300 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:21.300 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:21.300 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.557 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.557 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.557 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.817 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 00:15:21.817 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.817 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.817 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.817 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:21.817 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:21.817 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:22.753 nvme0n1 00:15:22.753 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:22.753 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.753 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:23.011 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.011 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:23.011 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.011 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.011 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.011 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:23.011 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:23.011 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.270 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.270 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:15:23.270 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --hostid 88f52f68-80e5-4327-8a21-70d63145da24 -l 0 --dhchap-secret DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: --dhchap-ctrl-secret DHHC-1:03:MzEyMDdiZGUxMWM5ZTk1YmYwYWE2NTBjNjlkNzdhNmEzMTJmMzllM2RiNTcxZjlhYzg0NTM2MjQxZDQ2OGNiOOK0jxM=: 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:24.205 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:25.164 request: 00:15:25.164 { 00:15:25.164 "name": "nvme0", 00:15:25.164 "trtype": "tcp", 00:15:25.164 "traddr": "10.0.0.3", 00:15:25.164 "adrfam": "ipv4", 00:15:25.164 "trsvcid": "4420", 00:15:25.164 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:25.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24", 00:15:25.164 "prchk_reftag": false, 00:15:25.164 "prchk_guard": false, 00:15:25.164 "hdgst": false, 00:15:25.164 "ddgst": false, 00:15:25.164 "dhchap_key": "key1", 00:15:25.164 "allow_unrecognized_csi": false, 00:15:25.164 "method": "bdev_nvme_attach_controller", 00:15:25.164 "req_id": 1 00:15:25.164 } 00:15:25.165 Got JSON-RPC error response 00:15:25.165 response: 00:15:25.165 { 00:15:25.165 "code": -5, 00:15:25.165 "message": "Input/output error" 00:15:25.165 } 00:15:25.165 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:25.165 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:25.165 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:25.165 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:25.165 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:25.165 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:25.165 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:26.101 nvme0n1 00:15:26.101 
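A minimal sketch of the DH-HMAC-CHAP re-key flow the auth.sh steps above are exercising, condensed from the hostrpc/rpc_cmd calls in this trace. It reuses the same subsystem NQN, host NQN, address and named keys; it assumes the SPDK target (default RPC socket) and the host application (/var/tmp/host.sock) are already running and that key0..key3 were registered earlier in the run, as this test does before the captured portion.

# Sketch only; same commands as the trace above, trimmed to the essential sequence.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24

# Target side: restrict this host to DH-HMAC-CHAP key1.
$rpc nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key1

# Host side (separate RPC socket): attach with the matching key, confirm the
# controller appears, then detach again.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Attaching with a key the subsystem no longer accepts is the failure case shown
# above: the attach RPC returns -5 (Input/output error) and no controller is created.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0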
13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:26.101 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.101 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:26.359 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.360 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.360 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.924 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:26.924 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.925 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.925 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.925 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:26.925 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:26.925 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:27.183 nvme0n1 00:15:27.183 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:27.183 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:27.183 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.751 13:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: '' 2s 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: ]] 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZTFlYjI2Mjc0ODIyY2RhYTc0MDk5ZmRlY2RmNjVhNDMiv1Pi: 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:27.751 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: 2s 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:30.306 13:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: ]] 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Mjc0MTkzYjlhNzNkNTMzYTZhNzFlOGE5YTRiN2JlYjBmNGE3MGMzZTVjMWY1ZmM1GqW9mg==: 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:30.306 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:32.210 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:32.210 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:15:32.210 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:15:32.210 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:15:32.210 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:15:32.210 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:15:32.210 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:15:32.210 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.210 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:32.210 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.210 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.210 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.210 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:32.210 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:32.210 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:33.145 nvme0n1 00:15:33.145 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:33.145 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.145 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.145 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.145 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:33.145 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:33.712 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:33.712 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:33.712 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.350 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.350 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:34.350 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.350 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.350 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.350 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:34.350 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:34.350 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:34.350 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.350 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:34.914 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.914 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:34.914 13:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.914 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.914 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.914 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:34.914 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:34.914 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:34.914 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:15:34.914 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.914 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:15:34.914 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.914 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:34.914 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:35.479 request: 00:15:35.479 { 00:15:35.479 "name": "nvme0", 00:15:35.479 "dhchap_key": "key1", 00:15:35.479 "dhchap_ctrlr_key": "key3", 00:15:35.479 "method": "bdev_nvme_set_keys", 00:15:35.479 "req_id": 1 00:15:35.479 } 00:15:35.479 Got JSON-RPC error response 00:15:35.479 response: 00:15:35.479 { 00:15:35.479 "code": -13, 00:15:35.479 "message": "Permission denied" 00:15:35.479 } 00:15:35.479 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:35.479 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:35.479 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:35.479 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:35.479 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:35.479 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:35.479 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.737 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:35.737 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:36.672 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:36.672 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:36.672 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.931 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:36.931 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:36.931 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.931 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.931 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.931 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:36.931 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:36.931 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:38.307 nvme0n1 00:15:38.307 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:38.307 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.307 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.307 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.307 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:38.307 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:38.307 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:38.307 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:15:38.307 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:38.307 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:15:38.307 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:38.307 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:38.307 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:38.875 request: 00:15:38.875 { 00:15:38.875 "name": "nvme0", 00:15:38.875 "dhchap_key": "key2", 00:15:38.875 "dhchap_ctrlr_key": "key0", 00:15:38.875 "method": "bdev_nvme_set_keys", 00:15:38.875 "req_id": 1 00:15:38.875 } 00:15:38.875 Got JSON-RPC error response 00:15:38.875 response: 00:15:38.875 { 00:15:38.875 "code": -13, 00:15:38.875 "message": "Permission denied" 00:15:38.875 } 00:15:38.875 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:38.875 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:38.875 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:38.875 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:38.875 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:38.875 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:38.875 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.132 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:39.132 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:40.071 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:40.071 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:40.071 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.636 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:40.636 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:40.636 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:40.636 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67800 00:15:40.636 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67800 ']' 00:15:40.636 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67800 00:15:40.636 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:15:40.636 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:40.636 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67800 00:15:40.636 killing process with pid 67800 00:15:40.636 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:40.636 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:40.636 13:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67800' 00:15:40.636 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67800 00:15:40.636 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67800 00:15:40.895 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:40.895 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:40.895 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:41.153 rmmod nvme_tcp 00:15:41.153 rmmod nvme_fabrics 00:15:41.153 rmmod nvme_keyring 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 70985 ']' 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 70985 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 70985 ']' 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 70985 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70985 00:15:41.153 killing process with pid 70985 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70985' 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 70985 00:15:41.153 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 70985 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 
00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:41.411 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Yvz /tmp/spdk.key-sha256.oLv /tmp/spdk.key-sha384.JGR /tmp/spdk.key-sha512.ufx /tmp/spdk.key-sha512.oxL /tmp/spdk.key-sha384.KLy /tmp/spdk.key-sha256.Esk '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:41.670 00:15:41.670 real 3m27.848s 00:15:41.670 user 8m15.799s 00:15:41.670 sys 0m32.783s 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.670 ************************************ 00:15:41.670 END TEST nvmf_auth_target 
00:15:41.670 ************************************ 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:41.670 ************************************ 00:15:41.670 START TEST nvmf_bdevio_no_huge 00:15:41.670 ************************************ 00:15:41.670 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:41.929 * Looking for test storage... 00:15:41.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.929 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:41.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.929 --rc genhtml_branch_coverage=1 00:15:41.929 --rc genhtml_function_coverage=1 00:15:41.929 --rc genhtml_legend=1 00:15:41.929 --rc geninfo_all_blocks=1 00:15:41.929 --rc geninfo_unexecuted_blocks=1 00:15:41.929 00:15:41.929 ' 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:41.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.930 --rc genhtml_branch_coverage=1 00:15:41.930 --rc genhtml_function_coverage=1 00:15:41.930 --rc genhtml_legend=1 00:15:41.930 --rc geninfo_all_blocks=1 00:15:41.930 --rc geninfo_unexecuted_blocks=1 00:15:41.930 00:15:41.930 ' 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:41.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.930 --rc genhtml_branch_coverage=1 00:15:41.930 --rc genhtml_function_coverage=1 00:15:41.930 --rc genhtml_legend=1 00:15:41.930 --rc geninfo_all_blocks=1 00:15:41.930 --rc geninfo_unexecuted_blocks=1 00:15:41.930 00:15:41.930 ' 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:41.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.930 --rc genhtml_branch_coverage=1 00:15:41.930 --rc genhtml_function_coverage=1 00:15:41.930 --rc genhtml_legend=1 00:15:41.930 --rc geninfo_all_blocks=1 00:15:41.930 --rc geninfo_unexecuted_blocks=1 00:15:41.930 00:15:41.930 ' 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.930 
13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:41.930 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:41.930 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:41.930 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:41.931 
13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:41.931 Cannot find device "nvmf_init_br" 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:41.931 Cannot find device "nvmf_init_br2" 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:41.931 Cannot find device "nvmf_tgt_br" 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.931 Cannot find device "nvmf_tgt_br2" 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:41.931 Cannot find device "nvmf_init_br" 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:41.931 Cannot find device "nvmf_init_br2" 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:41.931 Cannot find device "nvmf_tgt_br" 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:15:41.931 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:42.190 Cannot find device "nvmf_tgt_br2" 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:42.190 Cannot find device "nvmf_br" 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:42.190 Cannot find device "nvmf_init_if" 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:42.190 Cannot find device "nvmf_init_if2" 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:15:42.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:42.190 13:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:42.190 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:42.449 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:42.449 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:15:42.449 00:15:42.449 --- 10.0.0.3 ping statistics --- 00:15:42.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.449 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:42.449 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:42.449 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:15:42.449 00:15:42.449 --- 10.0.0.4 ping statistics --- 00:15:42.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.449 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:42.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:42.449 00:15:42.449 --- 10.0.0.1 ping statistics --- 00:15:42.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.449 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:42.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:42.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:15:42.449 00:15:42.449 --- 10.0.0.2 ping statistics --- 00:15:42.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.449 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=71652 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 71652 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 71652 ']' 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.449 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:42.449 [2024-10-01 13:49:52.500893] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
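The nvmf_veth_init trace above is easier to follow condensed into a stand-alone sketch. The following uses the same device names, addresses and port that appear in the trace, but it is a simplified illustration rather than the literal nvmf/common.sh code (only one of the two interface pairs per side is shown):

  # target-side interfaces live in their own namespace; the *_br peers stay on the host
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target address
  # bridge the host-side peers together so initiator and target can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # open the NVMe/TCP port with a comment-tagged rule so teardown can strip it by tag later
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The ping exchange in the trace (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) verifies this topology before nvmfappstart launches nvmf_tgt inside nvmf_tgt_ns_spdk with --no-huge -s 1024.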
00:15:42.449 [2024-10-01 13:49:52.501034] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:42.708 [2024-10-01 13:49:52.653009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.708 [2024-10-01 13:49:52.830829] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.708 [2024-10-01 13:49:52.830951] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.708 [2024-10-01 13:49:52.830967] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.708 [2024-10-01 13:49:52.830978] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.708 [2024-10-01 13:49:52.830988] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.708 [2024-10-01 13:49:52.831203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:15:42.708 [2024-10-01 13:49:52.832650] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:15:42.708 [2024-10-01 13:49:52.832815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:15:42.708 [2024-10-01 13:49:52.832908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.708 [2024-10-01 13:49:52.840172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:43.647 [2024-10-01 13:49:53.615696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:43.647 Malloc0 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.647 13:49:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:43.647 [2024-10-01 13:49:53.657396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:15:43.647 { 00:15:43.647 "params": { 00:15:43.647 "name": "Nvme$subsystem", 00:15:43.647 "trtype": "$TEST_TRANSPORT", 00:15:43.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:43.647 "adrfam": "ipv4", 00:15:43.647 "trsvcid": "$NVMF_PORT", 00:15:43.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:43.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:43.647 "hdgst": ${hdgst:-false}, 00:15:43.647 "ddgst": ${ddgst:-false} 00:15:43.647 }, 00:15:43.647 "method": "bdev_nvme_attach_controller" 00:15:43.647 } 00:15:43.647 EOF 00:15:43.647 )") 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
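Stripped of the xtrace noise, the target-side setup that bdevio exercises here is a short RPC sequence. Re-issued by hand it would look roughly as follows; rpc_cmd in the test harness is a wrapper around scripts/rpc.py, and every value below is the one traced above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev with 512-byte blocks, matching the "I/O targets" line below
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The bdevio binary itself does not talk to that RPC socket: it consumes the JSON that gen_nvmf_target_json assembles around this point (the heredoc/cat/jq trace), handed over as --json /dev/fd/62, and uses it to attach Nvme1 over NVMe/TCP at 10.0.0.3:4420.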
00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:15:43.647 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:15:43.647 "params": { 00:15:43.647 "name": "Nvme1", 00:15:43.647 "trtype": "tcp", 00:15:43.647 "traddr": "10.0.0.3", 00:15:43.647 "adrfam": "ipv4", 00:15:43.647 "trsvcid": "4420", 00:15:43.647 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.647 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:43.647 "hdgst": false, 00:15:43.647 "ddgst": false 00:15:43.647 }, 00:15:43.647 "method": "bdev_nvme_attach_controller" 00:15:43.647 }' 00:15:43.647 [2024-10-01 13:49:53.728652] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:15:43.647 [2024-10-01 13:49:53.728786] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71688 ] 00:15:43.905 [2024-10-01 13:49:53.893075] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:43.905 [2024-10-01 13:49:54.051843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.905 [2024-10-01 13:49:54.052009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.905 [2024-10-01 13:49:54.052021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.905 [2024-10-01 13:49:54.065995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:44.164 I/O targets: 00:15:44.164 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:44.164 00:15:44.164 00:15:44.164 CUnit - A unit testing framework for C - Version 2.1-3 00:15:44.164 http://cunit.sourceforge.net/ 00:15:44.164 00:15:44.164 00:15:44.164 Suite: bdevio tests on: Nvme1n1 00:15:44.164 Test: blockdev write read block ...passed 00:15:44.164 Test: blockdev write zeroes read block ...passed 00:15:44.164 Test: blockdev write zeroes read no split ...passed 00:15:44.164 Test: blockdev write zeroes read split ...passed 00:15:44.164 Test: blockdev write zeroes read split partial ...passed 00:15:44.164 Test: blockdev reset ...[2024-10-01 13:49:54.318111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:44.164 [2024-10-01 13:49:54.318426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e33720 (9): Bad file descriptor 00:15:44.164 [2024-10-01 13:49:54.329802] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:44.164 passed 00:15:44.164 Test: blockdev write read 8 blocks ...passed 00:15:44.164 Test: blockdev write read size > 128k ...passed 00:15:44.164 Test: blockdev write read invalid size ...passed 00:15:44.164 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:44.164 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:44.164 Test: blockdev write read max offset ...passed 00:15:44.164 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:44.164 Test: blockdev writev readv 8 blocks ...passed 00:15:44.164 Test: blockdev writev readv 30 x 1block ...passed 00:15:44.164 Test: blockdev writev readv block ...passed 00:15:44.164 Test: blockdev writev readv size > 128k ...passed 00:15:44.164 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:44.164 Test: blockdev comparev and writev ...[2024-10-01 13:49:54.339629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.164 [2024-10-01 13:49:54.339682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.164 [2024-10-01 13:49:54.339704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.164 [2024-10-01 13:49:54.339716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:44.164 [2024-10-01 13:49:54.340115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.164 [2024-10-01 13:49:54.340146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:44.164 [2024-10-01 13:49:54.340164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.164 [2024-10-01 13:49:54.340175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:44.164 [2024-10-01 13:49:54.340493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.164 [2024-10-01 13:49:54.340518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:44.164 [2024-10-01 13:49:54.340536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.164 [2024-10-01 13:49:54.340546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:44.164 [2024-10-01 13:49:54.340903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.164 [2024-10-01 13:49:54.340944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:44.164 [2024-10-01 13:49:54.340963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.164 [2024-10-01 13:49:54.340974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:44.164 passed 00:15:44.164 Test: blockdev nvme passthru rw ...passed 00:15:44.164 Test: blockdev nvme passthru vendor specific ...[2024-10-01 13:49:54.341835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:44.164 [2024-10-01 13:49:54.341859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:44.164 [2024-10-01 13:49:54.341992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:44.164 [2024-10-01 13:49:54.342017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:44.422 [2024-10-01 13:49:54.342149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:44.422 [2024-10-01 13:49:54.342165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:44.422 passed 00:15:44.422 Test: blockdev nvme admin passthru ...[2024-10-01 13:49:54.342271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:44.422 [2024-10-01 13:49:54.342292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:44.422 passed 00:15:44.422 Test: blockdev copy ...passed 00:15:44.422 00:15:44.422 Run Summary: Type Total Ran Passed Failed Inactive 00:15:44.422 suites 1 1 n/a 0 0 00:15:44.422 tests 23 23 23 0 0 00:15:44.422 asserts 152 152 152 0 n/a 00:15:44.422 00:15:44.422 Elapsed time = 0.182 seconds 00:15:44.680 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:44.680 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.680 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:44.680 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.680 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:44.680 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:44.680 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:44.680 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:44.939 rmmod nvme_tcp 00:15:44.939 rmmod nvme_fabrics 00:15:44.939 rmmod nvme_keyring 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 71652 ']' 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 71652 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 71652 ']' 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 71652 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71652 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:15:44.939 killing process with pid 71652 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71652' 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 71652 00:15:44.939 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 71652 00:15:45.505 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:45.506 13:49:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:45.506 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:15:45.764 00:15:45.764 real 0m3.987s 00:15:45.764 user 0m11.955s 00:15:45.764 sys 0m1.685s 00:15:45.764 ************************************ 00:15:45.764 END TEST nvmf_bdevio_no_huge 00:15:45.764 ************************************ 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:45.764 ************************************ 00:15:45.764 START TEST nvmf_tls 00:15:45.764 ************************************ 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:45.764 * Looking for test storage... 
00:15:45.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:45.764 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.025 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:46.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.025 --rc genhtml_branch_coverage=1 00:15:46.025 --rc genhtml_function_coverage=1 00:15:46.025 --rc genhtml_legend=1 00:15:46.025 --rc geninfo_all_blocks=1 00:15:46.025 --rc geninfo_unexecuted_blocks=1 00:15:46.025 00:15:46.025 ' 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:46.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.025 --rc genhtml_branch_coverage=1 00:15:46.025 --rc genhtml_function_coverage=1 00:15:46.025 --rc genhtml_legend=1 00:15:46.025 --rc geninfo_all_blocks=1 00:15:46.025 --rc geninfo_unexecuted_blocks=1 00:15:46.025 00:15:46.025 ' 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:46.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.025 --rc genhtml_branch_coverage=1 00:15:46.025 --rc genhtml_function_coverage=1 00:15:46.025 --rc genhtml_legend=1 00:15:46.025 --rc geninfo_all_blocks=1 00:15:46.025 --rc geninfo_unexecuted_blocks=1 00:15:46.025 00:15:46.025 ' 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:46.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.025 --rc genhtml_branch_coverage=1 00:15:46.025 --rc genhtml_function_coverage=1 00:15:46.025 --rc genhtml_legend=1 00:15:46.025 --rc geninfo_all_blocks=1 00:15:46.025 --rc geninfo_unexecuted_blocks=1 00:15:46.025 00:15:46.025 ' 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.025 13:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.025 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:46.026 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:46.026 
13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:46.026 Cannot find device "nvmf_init_br" 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:46.026 Cannot find device "nvmf_init_br2" 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:46.026 Cannot find device "nvmf_tgt_br" 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:46.026 Cannot find device "nvmf_tgt_br2" 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:46.026 Cannot find device "nvmf_init_br" 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:46.026 Cannot find device "nvmf_init_br2" 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:46.026 Cannot find device "nvmf_tgt_br" 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:46.026 Cannot find device "nvmf_tgt_br2" 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:46.026 Cannot find device "nvmf_br" 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:46.026 Cannot find device "nvmf_init_if" 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:46.026 Cannot find device "nvmf_init_if2" 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:46.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.026 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:15:46.027 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:46.027 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:46.027 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:46.027 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:46.297 13:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:46.297 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:46.297 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:15:46.297 00:15:46.297 --- 10.0.0.3 ping statistics --- 00:15:46.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.297 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:46.297 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:46.297 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:15:46.297 00:15:46.297 --- 10.0.0.4 ping statistics --- 00:15:46.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.297 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:46.297 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:46.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:46.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:15:46.555 00:15:46.555 --- 10.0.0.1 ping statistics --- 00:15:46.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.555 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:46.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:46.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:15:46.555 00:15:46.555 --- 10.0.0.2 ping statistics --- 00:15:46.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.555 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=71927 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 71927 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71927 ']' 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:46.555 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.555 [2024-10-01 13:49:56.581121] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
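The nvmf_tls run starting here launches nvmf_tgt with -m 0x2 --wait-for-rpc and then configures the ssl sock implementation over RPC. Condensed from the sock_* trace that follows (tls.sh sets rpc_py to scripts/rpc.py earlier in the trace), the sequence is roughly:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py sock_set_default_impl -i ssl
  $rpc_py sock_impl_get_options -i ssl | jq -r .tls_version   # reports 0 before anything is set
  $rpc_py sock_impl_set_options -i ssl --tls-version 13
  $rpc_py sock_impl_get_options -i ssl | jq -r .tls_version   # now reports 13
  $rpc_py sock_impl_set_options -i ssl --tls-version 7        # the trace goes on to exercise this value as well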
00:15:46.555 [2024-10-01 13:49:56.581219] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.555 [2024-10-01 13:49:56.719061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.812 [2024-10-01 13:49:56.835007] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.812 [2024-10-01 13:49:56.835063] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.812 [2024-10-01 13:49:56.835075] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.812 [2024-10-01 13:49:56.835084] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.812 [2024-10-01 13:49:56.835091] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.812 [2024-10-01 13:49:56.835121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.744 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:47.744 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:47.745 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:47.745 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:47.745 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.745 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.745 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:47.745 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:48.002 true 00:15:48.002 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:48.002 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:48.260 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:48.260 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:48.260 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:48.518 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:48.518 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:15:48.776 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:15:48.776 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:15:48.776 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:49.034 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:15:49.034 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:15:49.360 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:15:49.360 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:15:49.360 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:49.360 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:49.619 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:15:49.619 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:15:49.619 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:50.184 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:50.184 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:50.184 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:15:50.184 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:15:50.184 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:50.440 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:50.440 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:50.698 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:15:50.698 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:15:50.698 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:50.698 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:50.698 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:15:50.698 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:15:50.698 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:15:50.698 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:15:50.698 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
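Every option probe in this stretch follows the same set-then-read pattern against the ssl socket implementation: write an option over RPC, read it back with sock_impl_get_options, and compare the jq output so that a mismatch aborts the test. The tls_version and enable_ktls checks above condense to:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  [[ "$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)" == 13 ]] || exit 1
  $rpc sock_impl_set_options -i ssl --enable-ktls
  [[ "$($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls)" == true ]] || exit 1
  $rpc sock_impl_set_options -i ssl --disable-ktls
  [[ "$($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls)" == false ]] || exit 1

The run also stores --tls-version 7, a placeholder rather than a negotiable TLS version, purely to confirm that the option round-trips through the RPC layer.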
nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.qpQnjnEcNq 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.PK6msyk05u 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.qpQnjnEcNq 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.PK6msyk05u 00:15:50.955 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:51.213 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:51.471 [2024-10-01 13:50:01.627457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:51.730 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.qpQnjnEcNq 00:15:51.730 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qpQnjnEcNq 00:15:51.730 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:51.988 [2024-10-01 13:50:02.014255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.988 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:52.246 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:52.504 [2024-10-01 13:50:02.558420] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:52.504 [2024-10-01 13:50:02.558731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:52.504 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:52.762 malloc0 00:15:52.762 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:53.020 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qpQnjnEcNq 00:15:53.279 13:50:03 
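format_interchange_psk wraps a hex secret into the NVMe TLS PSK interchange format: an NVMeTLSkey-1 prefix, a two-digit hash indicator taken from the digest argument, and a base64 payload. Decoding the first key printed above (MDAxMTIy... is the literal hex text plus four trailing bytes) suggests the inline python appends a CRC-32 of the key text before encoding. A hedged reconstruction of that helper and of the temp-file handling that follows it; the real implementation lives in nvmf/common.sh and may differ in detail, and the little-endian CRC byte order is an assumption:

  format_interchange_psk() {
      # $1 = hex key text, $2 = hash indicator (1 or 2 in this run); CRC-32 append assumed little-endian
      python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); d=int(sys.argv[2]); print("NVMeTLSkey-1:%02x:%s:" % (d, base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' "$1" "$2"
  }
  key_path=$(mktemp)
  key_2_path=$(mktemp)
  # Write without a trailing newline, mirroring the echo -n calls above.
  printf '%s' "$(format_interchange_psk 00112233445566778899aabbccddeeff 1)" > "$key_path"
  printf '%s' "$(format_interchange_psk ffeeddccbbaa99887766554433221100 1)" > "$key_2_path"
  chmod 0600 "$key_path" "$key_2_path"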
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:53.538 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.qpQnjnEcNq 00:16:05.733 Initializing NVMe Controllers 00:16:05.733 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:05.733 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:05.733 Initialization complete. Launching workers. 00:16:05.733 ======================================================== 00:16:05.733 Latency(us) 00:16:05.733 Device Information : IOPS MiB/s Average min max 00:16:05.733 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7629.70 29.80 8391.08 1550.78 12599.79 00:16:05.733 ======================================================== 00:16:05.733 Total : 7629.70 29.80 8391.08 1550.78 12599.79 00:16:05.733 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qpQnjnEcNq 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qpQnjnEcNq 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72175 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72175 /var/tmp/bdevperf.sock 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72175 ']' 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
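setup_nvmf_tgt, whose individual RPCs are traced above, boils down to seven calls: create the TCP transport, create a subsystem backed by a malloc bdev, open a TLS-enabled listener, register the PSK file in the keyring, and allow the host NQN with that PSK. Collected in order, with every value exactly as printed in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k requests a TLS-secured listener (hence the "TLS support is considered experimental" notice)
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.qpQnjnEcNq
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

Only the host named in nvmf_subsystem_add_host, presenting the PSK behind key0, can complete a TLS connection to this listener; the expected-failure cases later in the log exercise exactly that restriction.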
00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.733 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.733 [2024-10-01 13:50:13.955275] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:16:05.733 [2024-10-01 13:50:13.955386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72175 ] 00:16:05.733 [2024-10-01 13:50:14.096187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.733 [2024-10-01 13:50:14.281878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.733 [2024-10-01 13:50:14.365516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:05.733 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:05.733 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:05.733 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qpQnjnEcNq 00:16:05.733 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:05.733 [2024-10-01 13:50:15.549085] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:05.733 TLSTESTn1 00:16:05.733 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:05.733 Running I/O for 10 seconds... 
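The ten-second verify run whose per-second throughput follows was driven from the host side by a second, idle bdevperf instance: the harness registers the same PSK file over bdevperf's private RPC socket, attaches a TLS NVMe-oF controller with --psk, and then triggers the workload with bdevperf.py. In shorthand, with the paths and arguments used in this run:

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock
  # Idle bdevperf (-z) waiting for RPC configuration on its own socket.
  $spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
  # ...wait for $sock to come up, as waitforlisten does above...
  $spdk/scripts/rpc.py -s "$sock" keyring_file_add_key key0 /tmp/tmp.qpQnjnEcNq
  $spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests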
00:16:16.034 3157.00 IOPS, 12.33 MiB/s 3154.00 IOPS, 12.32 MiB/s 3157.33 IOPS, 12.33 MiB/s 3136.00 IOPS, 12.25 MiB/s 3123.20 IOPS, 12.20 MiB/s 3136.00 IOPS, 12.25 MiB/s 3145.14 IOPS, 12.29 MiB/s 3149.25 IOPS, 12.30 MiB/s 3126.00 IOPS, 12.21 MiB/s 3127.50 IOPS, 12.22 MiB/s 00:16:16.034 Latency(us) 00:16:16.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.034 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:16.034 Verification LBA range: start 0x0 length 0x2000 00:16:16.034 TLSTESTn1 : 10.03 3131.79 12.23 0.00 0.00 40773.76 5510.98 30384.87 00:16:16.034 =================================================================================================================== 00:16:16.034 Total : 3131.79 12.23 0.00 0.00 40773.76 5510.98 30384.87 00:16:16.034 { 00:16:16.034 "results": [ 00:16:16.034 { 00:16:16.034 "job": "TLSTESTn1", 00:16:16.034 "core_mask": "0x4", 00:16:16.034 "workload": "verify", 00:16:16.034 "status": "finished", 00:16:16.034 "verify_range": { 00:16:16.034 "start": 0, 00:16:16.034 "length": 8192 00:16:16.034 }, 00:16:16.034 "queue_depth": 128, 00:16:16.034 "io_size": 4096, 00:16:16.034 "runtime": 10.026852, 00:16:16.034 "iops": 3131.7905161061517, 00:16:16.034 "mibps": 12.233556703539655, 00:16:16.034 "io_failed": 0, 00:16:16.034 "io_timeout": 0, 00:16:16.034 "avg_latency_us": 40773.764109523996, 00:16:16.034 "min_latency_us": 5510.981818181818, 00:16:16.034 "max_latency_us": 30384.872727272726 00:16:16.034 } 00:16:16.034 ], 00:16:16.034 "core_count": 1 00:16:16.034 } 00:16:16.034 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:16.034 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72175 00:16:16.034 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72175 ']' 00:16:16.034 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72175 00:16:16.034 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:16.034 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:16.034 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72175 00:16:16.034 killing process with pid 72175 00:16:16.034 Received shutdown signal, test time was about 10.000000 seconds 00:16:16.034 00:16:16.034 Latency(us) 00:16:16.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.034 =================================================================================================================== 00:16:16.034 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:16.034 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:16.034 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:16.034 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72175' 00:16:16.034 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72175 00:16:16.034 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72175 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.PK6msyk05u 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PK6msyk05u 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PK6msyk05u 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PK6msyk05u 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72311 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72311 /var/tmp/bdevperf.sock 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72311 ']' 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:16.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:16.329 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:16.329 [2024-10-01 13:50:26.290735] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:16:16.329 [2024-10-01 13:50:26.290867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72311 ] 00:16:16.329 [2024-10-01 13:50:26.430904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.586 [2024-10-01 13:50:26.592359] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.586 [2024-10-01 13:50:26.671087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:17.520 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:17.520 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:17.520 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PK6msyk05u 00:16:17.520 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:17.778 [2024-10-01 13:50:27.952713] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:18.036 [2024-10-01 13:50:27.958816] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:18.036 [2024-10-01 13:50:27.959243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d1090 (107): Transport endpoint is not connected 00:16:18.036 [2024-10-01 13:50:27.960223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d1090 (9): Bad file descriptor 00:16:18.036 [2024-10-01 13:50:27.961233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:18.036 [2024-10-01 13:50:27.961262] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:18.036 [2024-10-01 13:50:27.961292] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:18.036 [2024-10-01 13:50:27.961316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
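This is the first of the expected-failure cases: the initiator registers the second key (/tmp/tmp.PK6msyk05u), which the target never associated with host1, so the TLS handshake collapses, the controller ends up in a failed state, and the attach RPC returns the -5 Input/output error shown in the JSON-RPC response just below. The surrounding NOT wrapper only asserts that the attempt exits nonzero; a minimal sketch of that pattern (not the actual NOT helper from autotest_common.sh):

  # Expect failure: key0 on the bdevperf side points at a PSK the target does not accept for this host.
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
         bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
      echo "attach unexpectedly succeeded with a mismatched PSK" >&2
      exit 1
  fi

The next two cases reuse the correct key but swap in an unknown host NQN (host2) and then an unknown subsystem NQN (cnode2); both are expected to fail the same way.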
00:16:18.036 request: 00:16:18.036 { 00:16:18.036 "name": "TLSTEST", 00:16:18.036 "trtype": "tcp", 00:16:18.036 "traddr": "10.0.0.3", 00:16:18.036 "adrfam": "ipv4", 00:16:18.036 "trsvcid": "4420", 00:16:18.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.036 "prchk_reftag": false, 00:16:18.036 "prchk_guard": false, 00:16:18.036 "hdgst": false, 00:16:18.036 "ddgst": false, 00:16:18.036 "psk": "key0", 00:16:18.036 "allow_unrecognized_csi": false, 00:16:18.036 "method": "bdev_nvme_attach_controller", 00:16:18.036 "req_id": 1 00:16:18.036 } 00:16:18.036 Got JSON-RPC error response 00:16:18.036 response: 00:16:18.036 { 00:16:18.036 "code": -5, 00:16:18.036 "message": "Input/output error" 00:16:18.036 } 00:16:18.036 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72311 00:16:18.036 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72311 ']' 00:16:18.036 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72311 00:16:18.036 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:18.036 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:18.036 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72311 00:16:18.036 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:18.036 killing process with pid 72311 00:16:18.036 Received shutdown signal, test time was about 10.000000 seconds 00:16:18.036 00:16:18.036 Latency(us) 00:16:18.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.036 =================================================================================================================== 00:16:18.036 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:18.036 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:18.036 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72311' 00:16:18.036 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72311 00:16:18.036 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72311 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qpQnjnEcNq 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qpQnjnEcNq 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qpQnjnEcNq 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qpQnjnEcNq 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72345 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72345 /var/tmp/bdevperf.sock 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72345 ']' 00:16:18.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:18.294 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:18.294 [2024-10-01 13:50:28.430521] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:16:18.294 [2024-10-01 13:50:28.430632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72345 ] 00:16:18.552 [2024-10-01 13:50:28.568133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.552 [2024-10-01 13:50:28.724978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.809 [2024-10-01 13:50:28.806921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:19.415 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:19.415 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:19.415 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qpQnjnEcNq 00:16:19.979 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:19.979 [2024-10-01 13:50:30.134506] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:19.979 [2024-10-01 13:50:30.140213] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:19.979 [2024-10-01 13:50:30.140888] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:19.979 [2024-10-01 13:50:30.141443] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:19.979 [2024-10-01 13:50:30.141837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109c090 (107): Transport endpoint is not connected 00:16:19.979 [2024-10-01 13:50:30.142827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109c090 (9): Bad file descriptor 00:16:19.979 [2024-10-01 13:50:30.143824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:19.979 [2024-10-01 13:50:30.143996] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:19.979 [2024-10-01 13:50:30.144129] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:19.979 [2024-10-01 13:50:30.144284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:19.979 request: 00:16:19.979 { 00:16:19.979 "name": "TLSTEST", 00:16:19.979 "trtype": "tcp", 00:16:19.979 "traddr": "10.0.0.3", 00:16:19.979 "adrfam": "ipv4", 00:16:19.979 "trsvcid": "4420", 00:16:19.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:19.979 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:19.979 "prchk_reftag": false, 00:16:19.979 "prchk_guard": false, 00:16:19.979 "hdgst": false, 00:16:19.979 "ddgst": false, 00:16:19.979 "psk": "key0", 00:16:19.979 "allow_unrecognized_csi": false, 00:16:19.979 "method": "bdev_nvme_attach_controller", 00:16:19.979 "req_id": 1 00:16:19.979 } 00:16:19.979 Got JSON-RPC error response 00:16:19.979 response: 00:16:19.979 { 00:16:19.979 "code": -5, 00:16:19.979 "message": "Input/output error" 00:16:19.979 } 00:16:20.237 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72345 00:16:20.237 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72345 ']' 00:16:20.237 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72345 00:16:20.237 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:20.237 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:20.237 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72345 00:16:20.237 killing process with pid 72345 00:16:20.237 Received shutdown signal, test time was about 10.000000 seconds 00:16:20.237 00:16:20.237 Latency(us) 00:16:20.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.237 =================================================================================================================== 00:16:20.237 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:20.237 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:20.237 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:20.237 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72345' 00:16:20.237 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72345 00:16:20.237 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72345 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qpQnjnEcNq 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qpQnjnEcNq 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:20.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qpQnjnEcNq 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qpQnjnEcNq 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72379 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72379 /var/tmp/bdevperf.sock 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72379 ']' 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:20.495 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.496 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:20.496 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.496 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:20.496 [2024-10-01 13:50:30.606134] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:16:20.496 [2024-10-01 13:50:30.606528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72379 ] 00:16:20.753 [2024-10-01 13:50:30.747103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.753 [2024-10-01 13:50:30.904833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.011 [2024-10-01 13:50:30.986551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:21.576 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.576 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:21.576 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qpQnjnEcNq 00:16:22.150 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:22.150 [2024-10-01 13:50:32.283595] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:22.150 [2024-10-01 13:50:32.289632] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:22.150 [2024-10-01 13:50:32.289988] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:22.150 [2024-10-01 13:50:32.290202] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:22.150 [2024-10-01 13:50:32.290311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa55090 (107): Transport endpoint is not connected 00:16:22.150 [2024-10-01 13:50:32.291294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa55090 (9): Bad file descriptor 00:16:22.150 [2024-10-01 13:50:32.292291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:22.150 [2024-10-01 13:50:32.292470] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:22.150 [2024-10-01 13:50:32.292488] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:22.150 [2024-10-01 13:50:32.292502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:16:22.150 request: 00:16:22.150 { 00:16:22.150 "name": "TLSTEST", 00:16:22.150 "trtype": "tcp", 00:16:22.150 "traddr": "10.0.0.3", 00:16:22.150 "adrfam": "ipv4", 00:16:22.150 "trsvcid": "4420", 00:16:22.150 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:22.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:22.150 "prchk_reftag": false, 00:16:22.150 "prchk_guard": false, 00:16:22.150 "hdgst": false, 00:16:22.150 "ddgst": false, 00:16:22.150 "psk": "key0", 00:16:22.150 "allow_unrecognized_csi": false, 00:16:22.150 "method": "bdev_nvme_attach_controller", 00:16:22.150 "req_id": 1 00:16:22.150 } 00:16:22.150 Got JSON-RPC error response 00:16:22.150 response: 00:16:22.150 { 00:16:22.150 "code": -5, 00:16:22.150 "message": "Input/output error" 00:16:22.150 } 00:16:22.150 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72379 00:16:22.150 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72379 ']' 00:16:22.150 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72379 00:16:22.150 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:22.150 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:22.415 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72379 00:16:22.415 killing process with pid 72379 00:16:22.415 Received shutdown signal, test time was about 10.000000 seconds 00:16:22.415 00:16:22.415 Latency(us) 00:16:22.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.415 =================================================================================================================== 00:16:22.415 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:22.415 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:22.415 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:22.415 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72379' 00:16:22.415 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72379 00:16:22.415 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72379 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72418 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72418 /var/tmp/bdevperf.sock 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72418 ']' 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:22.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.675 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.675 [2024-10-01 13:50:32.765241] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:16:22.675 [2024-10-01 13:50:32.765370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72418 ] 00:16:22.933 [2024-10-01 13:50:32.905504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.933 [2024-10-01 13:50:33.051950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.190 [2024-10-01 13:50:33.132216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:23.756 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.756 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:23.756 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:24.014 [2024-10-01 13:50:34.052086] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:24.014 [2024-10-01 13:50:34.052160] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:24.014 request: 00:16:24.014 { 00:16:24.014 "name": "key0", 00:16:24.014 "path": "", 00:16:24.014 "method": "keyring_file_add_key", 00:16:24.014 "req_id": 1 00:16:24.014 } 00:16:24.014 Got JSON-RPC error response 00:16:24.014 response: 00:16:24.014 { 00:16:24.014 "code": -1, 00:16:24.014 "message": "Operation not permitted" 00:16:24.014 } 00:16:24.014 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:24.272 [2024-10-01 13:50:34.332345] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:24.272 [2024-10-01 13:50:34.332464] bdev_nvme.c:6389:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:24.272 request: 00:16:24.272 { 00:16:24.272 "name": "TLSTEST", 00:16:24.272 "trtype": "tcp", 00:16:24.272 "traddr": "10.0.0.3", 00:16:24.272 "adrfam": "ipv4", 00:16:24.272 "trsvcid": "4420", 00:16:24.272 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:24.272 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:24.272 "prchk_reftag": false, 00:16:24.272 "prchk_guard": false, 00:16:24.272 "hdgst": false, 00:16:24.272 "ddgst": false, 00:16:24.272 "psk": "key0", 00:16:24.272 "allow_unrecognized_csi": false, 00:16:24.272 "method": "bdev_nvme_attach_controller", 00:16:24.272 "req_id": 1 00:16:24.272 } 00:16:24.272 Got JSON-RPC error response 00:16:24.272 response: 00:16:24.272 { 00:16:24.272 "code": -126, 00:16:24.272 "message": "Required key not available" 00:16:24.272 } 00:16:24.272 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72418 00:16:24.272 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72418 ']' 00:16:24.272 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72418 00:16:24.272 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:24.272 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:24.272 13:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72418 00:16:24.272 killing process with pid 72418 00:16:24.272 Received shutdown signal, test time was about 10.000000 seconds 00:16:24.272 00:16:24.272 Latency(us) 00:16:24.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.272 =================================================================================================================== 00:16:24.272 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:24.272 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:24.272 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:24.272 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72418' 00:16:24.272 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72418 00:16:24.272 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72418 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71927 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71927 ']' 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71927 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71927 00:16:24.840 killing process with pid 71927 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71927' 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71927 00:16:24.840 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71927 00:16:24.840 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:24.840 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:24.840 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:16:24.840 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:16:24.840 13:50:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:24.840 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:16:24.840 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.QoaiFeB8BL 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.QoaiFeB8BL 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=72463 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 72463 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72463 ']' 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.098 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:25.098 [2024-10-01 13:50:35.132814] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:16:25.099 [2024-10-01 13:50:35.132944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.099 [2024-10-01 13:50:35.272592] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.357 [2024-10-01 13:50:35.386499] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.357 [2024-10-01 13:50:35.386570] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:25.357 [2024-10-01 13:50:35.386583] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.357 [2024-10-01 13:50:35.386591] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.357 [2024-10-01 13:50:35.386599] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.357 [2024-10-01 13:50:35.386629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.357 [2024-10-01 13:50:35.441318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:26.031 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:26.031 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:26.031 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:26.031 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:26.031 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:26.289 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.290 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.QoaiFeB8BL 00:16:26.290 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QoaiFeB8BL 00:16:26.290 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:26.548 [2024-10-01 13:50:36.547297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.548 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:26.806 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:27.064 [2024-10-01 13:50:37.171418] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:27.064 [2024-10-01 13:50:37.171663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:27.064 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:27.630 malloc0 00:16:27.630 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:27.888 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QoaiFeB8BL 00:16:28.147 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QoaiFeB8BL 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QoaiFeB8BL 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72524 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72524 /var/tmp/bdevperf.sock 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72524 ']' 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:28.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:28.406 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.406 [2024-10-01 13:50:38.552273] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:16:28.406 [2024-10-01 13:50:38.552670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72524 ] 00:16:28.665 [2024-10-01 13:50:38.694865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.951 [2024-10-01 13:50:38.869702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.951 [2024-10-01 13:50:38.949707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:29.527 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:29.527 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:29.527 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QoaiFeB8BL 00:16:29.786 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:30.044 [2024-10-01 13:50:40.057414] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:30.044 TLSTESTn1 00:16:30.044 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:30.304 Running I/O for 10 seconds... 00:16:40.162 3847.00 IOPS, 15.03 MiB/s 3833.50 IOPS, 14.97 MiB/s 3838.00 IOPS, 14.99 MiB/s 3842.75 IOPS, 15.01 MiB/s 3843.60 IOPS, 15.01 MiB/s 3855.00 IOPS, 15.06 MiB/s 3855.57 IOPS, 15.06 MiB/s 3857.00 IOPS, 15.07 MiB/s 3854.00 IOPS, 15.05 MiB/s 3855.80 IOPS, 15.06 MiB/s 00:16:40.162 Latency(us) 00:16:40.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.162 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:40.162 Verification LBA range: start 0x0 length 0x2000 00:16:40.162 TLSTESTn1 : 10.02 3861.74 15.08 0.00 0.00 33087.91 5838.66 24069.59 00:16:40.162 =================================================================================================================== 00:16:40.162 Total : 3861.74 15.08 0.00 0.00 33087.91 5838.66 24069.59 00:16:40.162 { 00:16:40.162 "results": [ 00:16:40.162 { 00:16:40.162 "job": "TLSTESTn1", 00:16:40.162 "core_mask": "0x4", 00:16:40.162 "workload": "verify", 00:16:40.162 "status": "finished", 00:16:40.162 "verify_range": { 00:16:40.162 "start": 0, 00:16:40.162 "length": 8192 00:16:40.162 }, 00:16:40.162 "queue_depth": 128, 00:16:40.162 "io_size": 4096, 00:16:40.162 "runtime": 10.017254, 00:16:40.162 "iops": 3861.736959050854, 00:16:40.162 "mibps": 15.084909996292398, 00:16:40.162 "io_failed": 0, 00:16:40.162 "io_timeout": 0, 00:16:40.162 "avg_latency_us": 33087.90583581655, 00:16:40.162 "min_latency_us": 5838.6618181818185, 00:16:40.162 "max_latency_us": 24069.585454545453 00:16:40.162 } 00:16:40.162 ], 00:16:40.162 "core_count": 1 00:16:40.162 } 00:16:40.162 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:40.162 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # 
killprocess 72524 00:16:40.162 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72524 ']' 00:16:40.162 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72524 00:16:40.162 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:40.162 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:40.162 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72524 00:16:40.162 killing process with pid 72524 00:16:40.162 Received shutdown signal, test time was about 10.000000 seconds 00:16:40.162 00:16:40.162 Latency(us) 00:16:40.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.162 =================================================================================================================== 00:16:40.162 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:40.162 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:40.162 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:40.162 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72524' 00:16:40.162 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72524 00:16:40.162 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72524 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.QoaiFeB8BL 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QoaiFeB8BL 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QoaiFeB8BL 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QoaiFeB8BL 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QoaiFeB8BL 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72660 00:16:40.729 
13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72660 /var/tmp/bdevperf.sock 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72660 ']' 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:40.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:40.729 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.729 [2024-10-01 13:50:50.754935] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:16:40.729 [2024-10-01 13:50:50.755420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72660 ] 00:16:40.729 [2024-10-01 13:50:50.896876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.988 [2024-10-01 13:50:51.047051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.988 [2024-10-01 13:50:51.123237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:41.923 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.924 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:41.924 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QoaiFeB8BL 00:16:41.924 [2024-10-01 13:50:51.988107] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QoaiFeB8BL': 0100666 00:16:41.924 [2024-10-01 13:50:51.988550] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:41.924 request: 00:16:41.924 { 00:16:41.924 "name": "key0", 00:16:41.924 "path": "/tmp/tmp.QoaiFeB8BL", 00:16:41.924 "method": "keyring_file_add_key", 00:16:41.924 "req_id": 1 00:16:41.924 } 00:16:41.924 Got JSON-RPC error response 00:16:41.924 response: 00:16:41.924 { 00:16:41.924 "code": -1, 00:16:41.924 "message": "Operation not permitted" 00:16:41.924 } 00:16:41.924 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:42.182 [2024-10-01 13:50:52.244268] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: 
TLS support is considered experimental 00:16:42.182 [2024-10-01 13:50:52.244353] bdev_nvme.c:6389:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:42.182 request: 00:16:42.182 { 00:16:42.182 "name": "TLSTEST", 00:16:42.182 "trtype": "tcp", 00:16:42.182 "traddr": "10.0.0.3", 00:16:42.182 "adrfam": "ipv4", 00:16:42.182 "trsvcid": "4420", 00:16:42.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:42.183 "prchk_reftag": false, 00:16:42.183 "prchk_guard": false, 00:16:42.183 "hdgst": false, 00:16:42.183 "ddgst": false, 00:16:42.183 "psk": "key0", 00:16:42.183 "allow_unrecognized_csi": false, 00:16:42.183 "method": "bdev_nvme_attach_controller", 00:16:42.183 "req_id": 1 00:16:42.183 } 00:16:42.183 Got JSON-RPC error response 00:16:42.183 response: 00:16:42.183 { 00:16:42.183 "code": -126, 00:16:42.183 "message": "Required key not available" 00:16:42.183 } 00:16:42.183 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72660 00:16:42.183 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72660 ']' 00:16:42.183 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72660 00:16:42.183 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:42.183 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.183 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72660 00:16:42.183 killing process with pid 72660 00:16:42.183 Received shutdown signal, test time was about 10.000000 seconds 00:16:42.183 00:16:42.183 Latency(us) 00:16:42.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.183 =================================================================================================================== 00:16:42.183 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:42.183 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:42.183 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:42.183 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72660' 00:16:42.183 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72660 00:16:42.183 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72660 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72463 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72463 ']' 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72463 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72463 00:16:42.749 killing process with pid 72463 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72463' 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72463 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72463 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=72699 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 72699 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72699 ']' 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:42.749 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:43.007 [2024-10-01 13:50:52.982369] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:16:43.007 [2024-10-01 13:50:52.982778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.007 [2024-10-01 13:50:53.119318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.265 [2024-10-01 13:50:53.240381] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.265 [2024-10-01 13:50:53.240448] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:43.265 [2024-10-01 13:50:53.240477] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.265 [2024-10-01 13:50:53.240486] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.265 [2024-10-01 13:50:53.240494] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.265 [2024-10-01 13:50:53.240523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.265 [2024-10-01 13:50:53.297016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:43.832 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:43.832 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:43.832 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:43.832 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:43.832 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.090 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.090 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.QoaiFeB8BL 00:16:44.090 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:44.090 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.QoaiFeB8BL 00:16:44.090 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:16:44.090 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.090 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:16:44.090 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.090 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.QoaiFeB8BL 00:16:44.090 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QoaiFeB8BL 00:16:44.090 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:44.348 [2024-10-01 13:50:54.331302] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.348 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:44.633 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:44.933 [2024-10-01 13:50:54.975441] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:44.933 [2024-10-01 13:50:54.975967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:44.933 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 
00:16:45.191 malloc0 00:16:45.191 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:45.449 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QoaiFeB8BL 00:16:45.707 [2024-10-01 13:50:55.802057] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QoaiFeB8BL': 0100666 00:16:45.707 [2024-10-01 13:50:55.802112] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:45.707 request: 00:16:45.707 { 00:16:45.707 "name": "key0", 00:16:45.707 "path": "/tmp/tmp.QoaiFeB8BL", 00:16:45.707 "method": "keyring_file_add_key", 00:16:45.707 "req_id": 1 00:16:45.707 } 00:16:45.707 Got JSON-RPC error response 00:16:45.707 response: 00:16:45.707 { 00:16:45.707 "code": -1, 00:16:45.707 "message": "Operation not permitted" 00:16:45.707 } 00:16:45.707 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:45.966 [2024-10-01 13:50:56.114151] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:45.966 [2024-10-01 13:50:56.114243] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:45.966 request: 00:16:45.966 { 00:16:45.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.966 "host": "nqn.2016-06.io.spdk:host1", 00:16:45.966 "psk": "key0", 00:16:45.966 "method": "nvmf_subsystem_add_host", 00:16:45.966 "req_id": 1 00:16:45.966 } 00:16:45.966 Got JSON-RPC error response 00:16:45.966 response: 00:16:45.966 { 00:16:45.966 "code": -32603, 00:16:45.966 "message": "Internal error" 00:16:45.966 } 00:16:45.966 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:45.966 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.966 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.966 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.966 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72699 00:16:45.966 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72699 ']' 00:16:45.966 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72699 00:16:45.966 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:45.966 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.225 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72699 00:16:46.225 killing process with pid 72699 00:16:46.225 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:46.225 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:46.225 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72699' 00:16:46.225 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72699 
00:16:46.225 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72699 00:16:46.484 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.QoaiFeB8BL 00:16:46.484 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:16:46.484 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:46.484 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:46.484 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.484 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=72774 00:16:46.484 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:46.484 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 72774 00:16:46.484 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72774 ']' 00:16:46.484 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.484 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.484 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.484 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:46.484 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.484 [2024-10-01 13:50:56.514083] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:16:46.484 [2024-10-01 13:50:56.514178] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.484 [2024-10-01 13:50:56.650256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.742 [2024-10-01 13:50:56.762829] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.742 [2024-10-01 13:50:56.762937] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.742 [2024-10-01 13:50:56.762970] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.742 [2024-10-01 13:50:56.762983] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.742 [2024-10-01 13:50:56.762995] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:46.742 [2024-10-01 13:50:56.763036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.742 [2024-10-01 13:50:56.817732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:47.675 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:47.675 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:47.675 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:47.675 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:47.675 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.675 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.675 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.QoaiFeB8BL 00:16:47.675 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QoaiFeB8BL 00:16:47.675 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:47.967 [2024-10-01 13:50:57.892573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.967 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:48.225 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:48.482 [2024-10-01 13:50:58.404680] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:48.482 [2024-10-01 13:50:58.404953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:48.482 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:48.740 malloc0 00:16:48.740 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:48.998 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QoaiFeB8BL 00:16:49.259 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:49.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
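From here the test repeats the same initiator-side pattern used for the earlier TLSTESTn1 run: load the PSK into the bdevperf application's keyring over its private RPC socket, attach a TLS-protected NVMe/TCP controller with that key, then drive the timed verify workload. Condensed from the rpc.py and bdevperf.py invocations recorded in this log (socket path, address, and NQNs are the ones used throughout the run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # Register the PSK file with the bdevperf keyring (the earlier 0666 attempt shows group/other access is rejected)
  "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.QoaiFeB8BL
  # Attach an NVMe/TCP controller that negotiates TLS using that PSK
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # Run the timed verify workload against the attached namespace
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests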
00:16:49.516 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72835 00:16:49.517 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:49.517 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:49.517 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72835 /var/tmp/bdevperf.sock 00:16:49.517 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72835 ']' 00:16:49.517 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:49.517 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:49.517 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:49.517 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:49.517 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.517 [2024-10-01 13:50:59.629379] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:16:49.517 [2024-10-01 13:50:59.629749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72835 ] 00:16:49.774 [2024-10-01 13:50:59.767376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.774 [2024-10-01 13:50:59.947411] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.031 [2024-10-01 13:51:00.031681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:50.597 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.597 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:50.597 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QoaiFeB8BL 00:16:50.854 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:51.111 [2024-10-01 13:51:01.252564] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:51.369 TLSTESTn1 00:16:51.369 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:51.626 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:16:51.626 "subsystems": [ 00:16:51.626 { 00:16:51.626 "subsystem": "keyring", 00:16:51.626 "config": [ 00:16:51.626 { 00:16:51.626 "method": "keyring_file_add_key", 00:16:51.626 "params": { 00:16:51.626 "name": "key0", 00:16:51.626 "path": "/tmp/tmp.QoaiFeB8BL" 00:16:51.626 } 00:16:51.626 } 00:16:51.626 ] 00:16:51.626 }, 
00:16:51.626 { 00:16:51.626 "subsystem": "iobuf", 00:16:51.626 "config": [ 00:16:51.626 { 00:16:51.626 "method": "iobuf_set_options", 00:16:51.626 "params": { 00:16:51.626 "small_pool_count": 8192, 00:16:51.626 "large_pool_count": 1024, 00:16:51.626 "small_bufsize": 8192, 00:16:51.626 "large_bufsize": 135168 00:16:51.626 } 00:16:51.626 } 00:16:51.626 ] 00:16:51.626 }, 00:16:51.626 { 00:16:51.626 "subsystem": "sock", 00:16:51.626 "config": [ 00:16:51.626 { 00:16:51.626 "method": "sock_set_default_impl", 00:16:51.626 "params": { 00:16:51.626 "impl_name": "uring" 00:16:51.626 } 00:16:51.626 }, 00:16:51.626 { 00:16:51.626 "method": "sock_impl_set_options", 00:16:51.626 "params": { 00:16:51.626 "impl_name": "ssl", 00:16:51.626 "recv_buf_size": 4096, 00:16:51.626 "send_buf_size": 4096, 00:16:51.626 "enable_recv_pipe": true, 00:16:51.626 "enable_quickack": false, 00:16:51.626 "enable_placement_id": 0, 00:16:51.626 "enable_zerocopy_send_server": true, 00:16:51.626 "enable_zerocopy_send_client": false, 00:16:51.626 "zerocopy_threshold": 0, 00:16:51.626 "tls_version": 0, 00:16:51.626 "enable_ktls": false 00:16:51.626 } 00:16:51.626 }, 00:16:51.626 { 00:16:51.626 "method": "sock_impl_set_options", 00:16:51.626 "params": { 00:16:51.626 "impl_name": "posix", 00:16:51.626 "recv_buf_size": 2097152, 00:16:51.626 "send_buf_size": 2097152, 00:16:51.626 "enable_recv_pipe": true, 00:16:51.626 "enable_quickack": false, 00:16:51.626 "enable_placement_id": 0, 00:16:51.626 "enable_zerocopy_send_server": true, 00:16:51.626 "enable_zerocopy_send_client": false, 00:16:51.626 "zerocopy_threshold": 0, 00:16:51.626 "tls_version": 0, 00:16:51.626 "enable_ktls": false 00:16:51.626 } 00:16:51.626 }, 00:16:51.626 { 00:16:51.626 "method": "sock_impl_set_options", 00:16:51.626 "params": { 00:16:51.626 "impl_name": "uring", 00:16:51.626 "recv_buf_size": 2097152, 00:16:51.626 "send_buf_size": 2097152, 00:16:51.626 "enable_recv_pipe": true, 00:16:51.626 "enable_quickack": false, 00:16:51.626 "enable_placement_id": 0, 00:16:51.626 "enable_zerocopy_send_server": false, 00:16:51.626 "enable_zerocopy_send_client": false, 00:16:51.626 "zerocopy_threshold": 0, 00:16:51.626 "tls_version": 0, 00:16:51.626 "enable_ktls": false 00:16:51.626 } 00:16:51.626 } 00:16:51.626 ] 00:16:51.626 }, 00:16:51.627 { 00:16:51.627 "subsystem": "vmd", 00:16:51.627 "config": [] 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "subsystem": "accel", 00:16:51.627 "config": [ 00:16:51.627 { 00:16:51.627 "method": "accel_set_options", 00:16:51.627 "params": { 00:16:51.627 "small_cache_size": 128, 00:16:51.627 "large_cache_size": 16, 00:16:51.627 "task_count": 2048, 00:16:51.627 "sequence_count": 2048, 00:16:51.627 "buf_count": 2048 00:16:51.627 } 00:16:51.627 } 00:16:51.627 ] 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "subsystem": "bdev", 00:16:51.627 "config": [ 00:16:51.627 { 00:16:51.627 "method": "bdev_set_options", 00:16:51.627 "params": { 00:16:51.627 "bdev_io_pool_size": 65535, 00:16:51.627 "bdev_io_cache_size": 256, 00:16:51.627 "bdev_auto_examine": true, 00:16:51.627 "iobuf_small_cache_size": 128, 00:16:51.627 "iobuf_large_cache_size": 16 00:16:51.627 } 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "method": "bdev_raid_set_options", 00:16:51.627 "params": { 00:16:51.627 "process_window_size_kb": 1024, 00:16:51.627 "process_max_bandwidth_mb_sec": 0 00:16:51.627 } 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "method": "bdev_iscsi_set_options", 00:16:51.627 "params": { 00:16:51.627 "timeout_sec": 30 00:16:51.627 } 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 
"method": "bdev_nvme_set_options", 00:16:51.627 "params": { 00:16:51.627 "action_on_timeout": "none", 00:16:51.627 "timeout_us": 0, 00:16:51.627 "timeout_admin_us": 0, 00:16:51.627 "keep_alive_timeout_ms": 10000, 00:16:51.627 "arbitration_burst": 0, 00:16:51.627 "low_priority_weight": 0, 00:16:51.627 "medium_priority_weight": 0, 00:16:51.627 "high_priority_weight": 0, 00:16:51.627 "nvme_adminq_poll_period_us": 10000, 00:16:51.627 "nvme_ioq_poll_period_us": 0, 00:16:51.627 "io_queue_requests": 0, 00:16:51.627 "delay_cmd_submit": true, 00:16:51.627 "transport_retry_count": 4, 00:16:51.627 "bdev_retry_count": 3, 00:16:51.627 "transport_ack_timeout": 0, 00:16:51.627 "ctrlr_loss_timeout_sec": 0, 00:16:51.627 "reconnect_delay_sec": 0, 00:16:51.627 "fast_io_fail_timeout_sec": 0, 00:16:51.627 "disable_auto_failback": false, 00:16:51.627 "generate_uuids": false, 00:16:51.627 "transport_tos": 0, 00:16:51.627 "nvme_error_stat": false, 00:16:51.627 "rdma_srq_size": 0, 00:16:51.627 "io_path_stat": false, 00:16:51.627 "allow_accel_sequence": false, 00:16:51.627 "rdma_max_cq_size": 0, 00:16:51.627 "rdma_cm_event_timeout_ms": 0, 00:16:51.627 "dhchap_digests": [ 00:16:51.627 "sha256", 00:16:51.627 "sha384", 00:16:51.627 "sha512" 00:16:51.627 ], 00:16:51.627 "dhchap_dhgroups": [ 00:16:51.627 "null", 00:16:51.627 "ffdhe2048", 00:16:51.627 "ffdhe3072", 00:16:51.627 "ffdhe4096", 00:16:51.627 "ffdhe6144", 00:16:51.627 "ffdhe8192" 00:16:51.627 ] 00:16:51.627 } 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "method": "bdev_nvme_set_hotplug", 00:16:51.627 "params": { 00:16:51.627 "period_us": 100000, 00:16:51.627 "enable": false 00:16:51.627 } 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "method": "bdev_malloc_create", 00:16:51.627 "params": { 00:16:51.627 "name": "malloc0", 00:16:51.627 "num_blocks": 8192, 00:16:51.627 "block_size": 4096, 00:16:51.627 "physical_block_size": 4096, 00:16:51.627 "uuid": "65db1ddc-8932-44af-b7a6-ad3c67bd71b8", 00:16:51.627 "optimal_io_boundary": 0, 00:16:51.627 "md_size": 0, 00:16:51.627 "dif_type": 0, 00:16:51.627 "dif_is_head_of_md": false, 00:16:51.627 "dif_pi_format": 0 00:16:51.627 } 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "method": "bdev_wait_for_examine" 00:16:51.627 } 00:16:51.627 ] 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "subsystem": "nbd", 00:16:51.627 "config": [] 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "subsystem": "scheduler", 00:16:51.627 "config": [ 00:16:51.627 { 00:16:51.627 "method": "framework_set_scheduler", 00:16:51.627 "params": { 00:16:51.627 "name": "static" 00:16:51.627 } 00:16:51.627 } 00:16:51.627 ] 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "subsystem": "nvmf", 00:16:51.627 "config": [ 00:16:51.627 { 00:16:51.627 "method": "nvmf_set_config", 00:16:51.627 "params": { 00:16:51.627 "discovery_filter": "match_any", 00:16:51.627 "admin_cmd_passthru": { 00:16:51.627 "identify_ctrlr": false 00:16:51.627 }, 00:16:51.627 "dhchap_digests": [ 00:16:51.627 "sha256", 00:16:51.627 "sha384", 00:16:51.627 "sha512" 00:16:51.627 ], 00:16:51.627 "dhchap_dhgroups": [ 00:16:51.627 "null", 00:16:51.627 "ffdhe2048", 00:16:51.627 "ffdhe3072", 00:16:51.627 "ffdhe4096", 00:16:51.627 "ffdhe6144", 00:16:51.627 "ffdhe8192" 00:16:51.627 ] 00:16:51.627 } 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "method": "nvmf_set_max_subsystems", 00:16:51.627 "params": { 00:16:51.627 "max_subsystems": 1024 00:16:51.627 } 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "method": "nvmf_set_crdt", 00:16:51.627 "params": { 00:16:51.627 "crdt1": 0, 00:16:51.627 "crdt2": 0, 00:16:51.627 "crdt3": 0 
00:16:51.627 } 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "method": "nvmf_create_transport", 00:16:51.627 "params": { 00:16:51.627 "trtype": "TCP", 00:16:51.627 "max_queue_depth": 128, 00:16:51.627 "max_io_qpairs_per_ctrlr": 127, 00:16:51.627 "in_capsule_data_size": 4096, 00:16:51.627 "max_io_size": 131072, 00:16:51.627 "io_unit_size": 131072, 00:16:51.627 "max_aq_depth": 128, 00:16:51.627 "num_shared_buffers": 511, 00:16:51.627 "buf_cache_size": 4294967295, 00:16:51.627 "dif_insert_or_strip": false, 00:16:51.627 "zcopy": false, 00:16:51.627 "c2h_success": false, 00:16:51.627 "sock_priority": 0, 00:16:51.627 "abort_timeout_sec": 1, 00:16:51.627 "ack_timeout": 0, 00:16:51.627 "data_wr_pool_size": 0 00:16:51.627 } 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "method": "nvmf_create_subsystem", 00:16:51.627 "params": { 00:16:51.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.627 "allow_any_host": false, 00:16:51.627 "serial_number": "SPDK00000000000001", 00:16:51.627 "model_number": "SPDK bdev Controller", 00:16:51.627 "max_namespaces": 10, 00:16:51.627 "min_cntlid": 1, 00:16:51.627 "max_cntlid": 65519, 00:16:51.627 "ana_reporting": false 00:16:51.627 } 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "method": "nvmf_subsystem_add_host", 00:16:51.627 "params": { 00:16:51.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.627 "host": "nqn.2016-06.io.spdk:host1", 00:16:51.627 "psk": "key0" 00:16:51.627 } 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "method": "nvmf_subsystem_add_ns", 00:16:51.627 "params": { 00:16:51.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.627 "namespace": { 00:16:51.627 "nsid": 1, 00:16:51.627 "bdev_name": "malloc0", 00:16:51.627 "nguid": "65DB1DDC893244AFB7A6AD3C67BD71B8", 00:16:51.627 "uuid": "65db1ddc-8932-44af-b7a6-ad3c67bd71b8", 00:16:51.627 "no_auto_visible": false 00:16:51.627 } 00:16:51.627 } 00:16:51.627 }, 00:16:51.627 { 00:16:51.627 "method": "nvmf_subsystem_add_listener", 00:16:51.627 "params": { 00:16:51.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.627 "listen_address": { 00:16:51.627 "trtype": "TCP", 00:16:51.627 "adrfam": "IPv4", 00:16:51.627 "traddr": "10.0.0.3", 00:16:51.627 "trsvcid": "4420" 00:16:51.627 }, 00:16:51.627 "secure_channel": true 00:16:51.627 } 00:16:51.627 } 00:16:51.627 ] 00:16:51.627 } 00:16:51.627 ] 00:16:51.627 }' 00:16:51.628 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:52.194 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:16:52.194 "subsystems": [ 00:16:52.194 { 00:16:52.194 "subsystem": "keyring", 00:16:52.194 "config": [ 00:16:52.194 { 00:16:52.194 "method": "keyring_file_add_key", 00:16:52.194 "params": { 00:16:52.194 "name": "key0", 00:16:52.194 "path": "/tmp/tmp.QoaiFeB8BL" 00:16:52.194 } 00:16:52.194 } 00:16:52.194 ] 00:16:52.194 }, 00:16:52.194 { 00:16:52.194 "subsystem": "iobuf", 00:16:52.194 "config": [ 00:16:52.194 { 00:16:52.194 "method": "iobuf_set_options", 00:16:52.194 "params": { 00:16:52.194 "small_pool_count": 8192, 00:16:52.194 "large_pool_count": 1024, 00:16:52.194 "small_bufsize": 8192, 00:16:52.194 "large_bufsize": 135168 00:16:52.194 } 00:16:52.194 } 00:16:52.194 ] 00:16:52.194 }, 00:16:52.194 { 00:16:52.194 "subsystem": "sock", 00:16:52.194 "config": [ 00:16:52.194 { 00:16:52.194 "method": "sock_set_default_impl", 00:16:52.194 "params": { 00:16:52.194 "impl_name": "uring" 00:16:52.194 } 00:16:52.194 }, 00:16:52.194 { 00:16:52.194 "method": 
"sock_impl_set_options", 00:16:52.194 "params": { 00:16:52.194 "impl_name": "ssl", 00:16:52.194 "recv_buf_size": 4096, 00:16:52.194 "send_buf_size": 4096, 00:16:52.194 "enable_recv_pipe": true, 00:16:52.194 "enable_quickack": false, 00:16:52.194 "enable_placement_id": 0, 00:16:52.194 "enable_zerocopy_send_server": true, 00:16:52.194 "enable_zerocopy_send_client": false, 00:16:52.194 "zerocopy_threshold": 0, 00:16:52.194 "tls_version": 0, 00:16:52.194 "enable_ktls": false 00:16:52.194 } 00:16:52.194 }, 00:16:52.194 { 00:16:52.194 "method": "sock_impl_set_options", 00:16:52.194 "params": { 00:16:52.194 "impl_name": "posix", 00:16:52.194 "recv_buf_size": 2097152, 00:16:52.194 "send_buf_size": 2097152, 00:16:52.194 "enable_recv_pipe": true, 00:16:52.194 "enable_quickack": false, 00:16:52.194 "enable_placement_id": 0, 00:16:52.194 "enable_zerocopy_send_server": true, 00:16:52.194 "enable_zerocopy_send_client": false, 00:16:52.194 "zerocopy_threshold": 0, 00:16:52.194 "tls_version": 0, 00:16:52.194 "enable_ktls": false 00:16:52.194 } 00:16:52.194 }, 00:16:52.194 { 00:16:52.194 "method": "sock_impl_set_options", 00:16:52.194 "params": { 00:16:52.194 "impl_name": "uring", 00:16:52.194 "recv_buf_size": 2097152, 00:16:52.194 "send_buf_size": 2097152, 00:16:52.194 "enable_recv_pipe": true, 00:16:52.194 "enable_quickack": false, 00:16:52.194 "enable_placement_id": 0, 00:16:52.194 "enable_zerocopy_send_server": false, 00:16:52.194 "enable_zerocopy_send_client": false, 00:16:52.194 "zerocopy_threshold": 0, 00:16:52.194 "tls_version": 0, 00:16:52.194 "enable_ktls": false 00:16:52.194 } 00:16:52.194 } 00:16:52.194 ] 00:16:52.194 }, 00:16:52.194 { 00:16:52.194 "subsystem": "vmd", 00:16:52.194 "config": [] 00:16:52.194 }, 00:16:52.194 { 00:16:52.194 "subsystem": "accel", 00:16:52.194 "config": [ 00:16:52.194 { 00:16:52.194 "method": "accel_set_options", 00:16:52.194 "params": { 00:16:52.194 "small_cache_size": 128, 00:16:52.194 "large_cache_size": 16, 00:16:52.194 "task_count": 2048, 00:16:52.194 "sequence_count": 2048, 00:16:52.194 "buf_count": 2048 00:16:52.194 } 00:16:52.194 } 00:16:52.194 ] 00:16:52.194 }, 00:16:52.194 { 00:16:52.194 "subsystem": "bdev", 00:16:52.194 "config": [ 00:16:52.194 { 00:16:52.194 "method": "bdev_set_options", 00:16:52.194 "params": { 00:16:52.194 "bdev_io_pool_size": 65535, 00:16:52.194 "bdev_io_cache_size": 256, 00:16:52.194 "bdev_auto_examine": true, 00:16:52.194 "iobuf_small_cache_size": 128, 00:16:52.194 "iobuf_large_cache_size": 16 00:16:52.194 } 00:16:52.194 }, 00:16:52.194 { 00:16:52.194 "method": "bdev_raid_set_options", 00:16:52.194 "params": { 00:16:52.194 "process_window_size_kb": 1024, 00:16:52.194 "process_max_bandwidth_mb_sec": 0 00:16:52.194 } 00:16:52.194 }, 00:16:52.194 { 00:16:52.194 "method": "bdev_iscsi_set_options", 00:16:52.194 "params": { 00:16:52.194 "timeout_sec": 30 00:16:52.194 } 00:16:52.194 }, 00:16:52.194 { 00:16:52.194 "method": "bdev_nvme_set_options", 00:16:52.194 "params": { 00:16:52.194 "action_on_timeout": "none", 00:16:52.194 "timeout_us": 0, 00:16:52.194 "timeout_admin_us": 0, 00:16:52.195 "keep_alive_timeout_ms": 10000, 00:16:52.195 "arbitration_burst": 0, 00:16:52.195 "low_priority_weight": 0, 00:16:52.195 "medium_priority_weight": 0, 00:16:52.195 "high_priority_weight": 0, 00:16:52.195 "nvme_adminq_poll_period_us": 10000, 00:16:52.195 "nvme_ioq_poll_period_us": 0, 00:16:52.195 "io_queue_requests": 512, 00:16:52.195 "delay_cmd_submit": true, 00:16:52.195 "transport_retry_count": 4, 00:16:52.195 "bdev_retry_count": 3, 00:16:52.195 
"transport_ack_timeout": 0, 00:16:52.195 "ctrlr_loss_timeout_sec": 0, 00:16:52.195 "reconnect_delay_sec": 0, 00:16:52.195 "fast_io_fail_timeout_sec": 0, 00:16:52.195 "disable_auto_failback": false, 00:16:52.195 "generate_uuids": false, 00:16:52.195 "transport_tos": 0, 00:16:52.195 "nvme_error_stat": false, 00:16:52.195 "rdma_srq_size": 0, 00:16:52.195 "io_path_stat": false, 00:16:52.195 "allow_accel_sequence": false, 00:16:52.195 "rdma_max_cq_size": 0, 00:16:52.195 "rdma_cm_event_timeout_ms": 0, 00:16:52.195 "dhchap_digests": [ 00:16:52.195 "sha256", 00:16:52.195 "sha384", 00:16:52.195 "sha512" 00:16:52.195 ], 00:16:52.195 "dhchap_dhgroups": [ 00:16:52.195 "null", 00:16:52.195 "ffdhe2048", 00:16:52.195 "ffdhe3072", 00:16:52.195 "ffdhe4096", 00:16:52.195 "ffdhe6144", 00:16:52.195 "ffdhe8192" 00:16:52.195 ] 00:16:52.195 } 00:16:52.195 }, 00:16:52.195 { 00:16:52.195 "method": "bdev_nvme_attach_controller", 00:16:52.195 "params": { 00:16:52.195 "name": "TLSTEST", 00:16:52.195 "trtype": "TCP", 00:16:52.195 "adrfam": "IPv4", 00:16:52.195 "traddr": "10.0.0.3", 00:16:52.195 "trsvcid": "4420", 00:16:52.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.195 "prchk_reftag": false, 00:16:52.195 "prchk_guard": false, 00:16:52.195 "ctrlr_loss_timeout_sec": 0, 00:16:52.195 "reconnect_delay_sec": 0, 00:16:52.195 "fast_io_fail_timeout_sec": 0, 00:16:52.195 "psk": "key0", 00:16:52.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:52.195 "hdgst": false, 00:16:52.195 "ddgst": false, 00:16:52.195 "multipath": "multipath" 00:16:52.195 } 00:16:52.195 }, 00:16:52.195 { 00:16:52.195 "method": "bdev_nvme_set_hotplug", 00:16:52.195 "params": { 00:16:52.195 "period_us": 100000, 00:16:52.195 "enable": false 00:16:52.195 } 00:16:52.195 }, 00:16:52.195 { 00:16:52.195 "method": "bdev_wait_for_examine" 00:16:52.195 } 00:16:52.195 ] 00:16:52.195 }, 00:16:52.195 { 00:16:52.195 "subsystem": "nbd", 00:16:52.195 "config": [] 00:16:52.195 } 00:16:52.195 ] 00:16:52.195 }' 00:16:52.195 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72835 00:16:52.195 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72835 ']' 00:16:52.195 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72835 00:16:52.195 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:52.195 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.195 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72835 00:16:52.195 killing process with pid 72835 00:16:52.195 Received shutdown signal, test time was about 10.000000 seconds 00:16:52.195 00:16:52.195 Latency(us) 00:16:52.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.195 =================================================================================================================== 00:16:52.195 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:52.195 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:52.195 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:52.195 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72835' 00:16:52.195 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 
-- # kill 72835 00:16:52.195 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72835 00:16:52.453 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72774 00:16:52.453 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72774 ']' 00:16:52.453 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72774 00:16:52.453 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:52.453 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.453 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72774 00:16:52.453 killing process with pid 72774 00:16:52.453 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:52.453 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:52.453 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72774' 00:16:52.453 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72774 00:16:52.453 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72774 00:16:52.713 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:52.713 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:16:52.713 "subsystems": [ 00:16:52.713 { 00:16:52.713 "subsystem": "keyring", 00:16:52.713 "config": [ 00:16:52.713 { 00:16:52.713 "method": "keyring_file_add_key", 00:16:52.713 "params": { 00:16:52.713 "name": "key0", 00:16:52.713 "path": "/tmp/tmp.QoaiFeB8BL" 00:16:52.713 } 00:16:52.713 } 00:16:52.713 ] 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "subsystem": "iobuf", 00:16:52.713 "config": [ 00:16:52.713 { 00:16:52.713 "method": "iobuf_set_options", 00:16:52.713 "params": { 00:16:52.713 "small_pool_count": 8192, 00:16:52.713 "large_pool_count": 1024, 00:16:52.713 "small_bufsize": 8192, 00:16:52.713 "large_bufsize": 135168 00:16:52.713 } 00:16:52.713 } 00:16:52.713 ] 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "subsystem": "sock", 00:16:52.713 "config": [ 00:16:52.713 { 00:16:52.713 "method": "sock_set_default_impl", 00:16:52.713 "params": { 00:16:52.713 "impl_name": "uring" 00:16:52.713 } 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "method": "sock_impl_set_options", 00:16:52.713 "params": { 00:16:52.713 "impl_name": "ssl", 00:16:52.713 "recv_buf_size": 4096, 00:16:52.713 "send_buf_size": 4096, 00:16:52.713 "enable_recv_pipe": true, 00:16:52.713 "enable_quickack": false, 00:16:52.713 "enable_placement_id": 0, 00:16:52.713 "enable_zerocopy_send_server": true, 00:16:52.713 "enable_zerocopy_send_client": false, 00:16:52.713 "zerocopy_threshold": 0, 00:16:52.713 "tls_version": 0, 00:16:52.713 "enable_ktls": false 00:16:52.713 } 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "method": "sock_impl_set_options", 00:16:52.713 "params": { 00:16:52.713 "impl_name": "posix", 00:16:52.713 "recv_buf_size": 2097152, 00:16:52.713 "send_buf_size": 2097152, 00:16:52.713 "enable_recv_pipe": true, 00:16:52.713 "enable_quickack": false, 00:16:52.713 "enable_placement_id": 0, 00:16:52.713 "enable_zerocopy_send_server": true, 00:16:52.713 "enable_zerocopy_send_client": false, 00:16:52.713 
"zerocopy_threshold": 0, 00:16:52.713 "tls_version": 0, 00:16:52.713 "enable_ktls": false 00:16:52.713 } 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "method": "sock_impl_set_options", 00:16:52.713 "params": { 00:16:52.713 "impl_name": "uring", 00:16:52.713 "recv_buf_size": 2097152, 00:16:52.713 "send_buf_size": 2097152, 00:16:52.713 "enable_recv_pipe": true, 00:16:52.713 "enable_quickack": false, 00:16:52.713 "enable_placement_id": 0, 00:16:52.713 "enable_zerocopy_send_server": false, 00:16:52.713 "enable_zerocopy_send_client": false, 00:16:52.713 "zerocopy_threshold": 0, 00:16:52.713 "tls_version": 0, 00:16:52.713 "enable_ktls": false 00:16:52.713 } 00:16:52.713 } 00:16:52.713 ] 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "subsystem": "vmd", 00:16:52.713 "config": [] 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "subsystem": "accel", 00:16:52.713 "config": [ 00:16:52.713 { 00:16:52.713 "method": "accel_set_options", 00:16:52.713 "params": { 00:16:52.713 "small_cache_size": 128, 00:16:52.713 "large_cache_size": 16, 00:16:52.713 "task_count": 2048, 00:16:52.713 "sequence_count": 2048, 00:16:52.713 "buf_count": 2048 00:16:52.713 } 00:16:52.713 } 00:16:52.713 ] 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "subsystem": "bdev", 00:16:52.713 "config": [ 00:16:52.713 { 00:16:52.713 "method": "bdev_set_options", 00:16:52.713 "params": { 00:16:52.713 "bdev_io_pool_size": 65535, 00:16:52.713 "bdev_io_cache_size": 256, 00:16:52.713 "bdev_auto_examine": true, 00:16:52.713 "iobuf_small_cache_size": 128, 00:16:52.713 "iobuf_large_cache_size": 16 00:16:52.713 } 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "method": "bdev_raid_set_options", 00:16:52.713 "params": { 00:16:52.713 "process_window_size_kb": 1024, 00:16:52.713 "process_max_bandwidth_mb_sec": 0 00:16:52.713 } 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "method": "bdev_iscsi_set_options", 00:16:52.713 "params": { 00:16:52.713 "timeout_sec": 30 00:16:52.713 } 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "method": "bdev_nvme_set_options", 00:16:52.713 "params": { 00:16:52.713 "action_on_timeout": "none", 00:16:52.713 "timeout_us": 0, 00:16:52.713 "timeout_admin_us": 0, 00:16:52.713 "keep_alive_timeout_ms": 10000, 00:16:52.713 "arbitration_burst": 0, 00:16:52.713 "low_priority_weight": 0, 00:16:52.713 "medium_priority_weight": 0, 00:16:52.713 "high_priority_weight": 0, 00:16:52.713 "nvme_adminq_poll_period_us": 10000, 00:16:52.713 "nvme_ioq_poll_period_us": 0, 00:16:52.713 "io_queue_requests": 0, 00:16:52.713 "delay_cmd_submit": true, 00:16:52.713 "transport_retry_count": 4, 00:16:52.713 "bdev_retry_count": 3, 00:16:52.713 "transport_ack_timeout": 0, 00:16:52.713 "ctrlr_loss_timeout_sec": 0, 00:16:52.713 "reconnect_delay_sec": 0, 00:16:52.713 "fast_io_fail_timeout_sec": 0, 00:16:52.713 "disable_auto_failback": false, 00:16:52.714 "generate_uuids": false, 00:16:52.714 "transport_tos": 0, 00:16:52.714 "nvme_error_stat": false, 00:16:52.714 "rdma_srq_size": 0, 00:16:52.714 "io_path_stat": false, 00:16:52.714 "allow_accel_sequence": false, 00:16:52.714 "rdma_max_cq_size": 0, 00:16:52.714 "rdma_cm_event_timeout_ms": 0, 00:16:52.714 "dhchap_digests": [ 00:16:52.714 "sha256", 00:16:52.714 "sha384", 00:16:52.714 "sha512" 00:16:52.714 ], 00:16:52.714 "dhchap_dhgroups": [ 00:16:52.714 "null", 00:16:52.714 "ffdhe2048", 00:16:52.714 "ffdhe3072", 00:16:52.714 "ffdhe4096", 00:16:52.714 "ffdhe6144", 00:16:52.714 "ffdhe8192" 00:16:52.714 ] 00:16:52.714 } 00:16:52.714 }, 00:16:52.714 { 00:16:52.714 "method": "bdev_nvme_set_hotplug", 00:16:52.714 "params": { 00:16:52.714 
"period_us": 100000, 00:16:52.714 "enable": false 00:16:52.714 } 00:16:52.714 }, 00:16:52.714 { 00:16:52.714 "method": "bdev_malloc_create", 00:16:52.714 "params": { 00:16:52.714 "name": "malloc0", 00:16:52.714 "num_blocks": 8192, 00:16:52.714 "block_size": 4096, 00:16:52.714 "physical_block_size": 4096, 00:16:52.714 "uuid": "65db1ddc-8932-44af-b7a6-ad3c67bd71b8", 00:16:52.714 "optimal_io_boundary": 0, 00:16:52.714 "md_size": 0, 00:16:52.714 "dif_type": 0, 00:16:52.714 "dif_is_head_of_md": false, 00:16:52.714 "dif_pi_format": 0 00:16:52.714 } 00:16:52.714 }, 00:16:52.714 { 00:16:52.714 "method": "bdev_wait_for_examine" 00:16:52.714 } 00:16:52.714 ] 00:16:52.714 }, 00:16:52.714 { 00:16:52.714 "subsystem": "nbd", 00:16:52.714 "config": [] 00:16:52.714 }, 00:16:52.714 { 00:16:52.714 "subsystem": "scheduler", 00:16:52.714 "config": [ 00:16:52.714 { 00:16:52.714 "method": "framework_set_scheduler", 00:16:52.714 "params": { 00:16:52.714 "name": "static" 00:16:52.714 } 00:16:52.714 } 00:16:52.714 ] 00:16:52.714 }, 00:16:52.714 { 00:16:52.714 "subsystem": "nvmf", 00:16:52.714 "config": [ 00:16:52.714 { 00:16:52.714 "method": "nvmf_set_config", 00:16:52.714 "params": { 00:16:52.714 "discovery_filter": "match_any", 00:16:52.714 "admin_cmd_passthru": { 00:16:52.714 "identify_ctrlr": false 00:16:52.714 }, 00:16:52.714 "dhchap_digests": [ 00:16:52.714 "sha256", 00:16:52.714 "sha384", 00:16:52.714 "sha512" 00:16:52.714 ], 00:16:52.714 "dhchap_dhgroups": [ 00:16:52.714 "null", 00:16:52.714 "ffdhe2048", 00:16:52.714 "ffdhe3072", 00:16:52.714 "ffdhe4096", 00:16:52.714 "ffdhe6144", 00:16:52.714 "ffdhe8192" 00:16:52.714 ] 00:16:52.714 } 00:16:52.714 }, 00:16:52.714 { 00:16:52.714 "method": "nvmf_set_max_subsystems", 00:16:52.714 "params": { 00:16:52.714 "max_subsystems": 1024 00:16:52.714 } 00:16:52.714 }, 00:16:52.714 { 00:16:52.714 "method": "nvmf_set_crdt", 00:16:52.714 "params": { 00:16:52.714 "crdt1": 0, 00:16:52.714 "crdt2": 0, 00:16:52.714 "crdt3": 0 00:16:52.714 } 00:16:52.714 }, 00:16:52.714 { 00:16:52.714 "method": "nvmf_create_transport", 00:16:52.714 "params": { 00:16:52.714 "trtype": "TCP", 00:16:52.714 "max_queue_depth": 128, 00:16:52.714 "max_io_qpairs_per_ctrlr": 127, 00:16:52.714 "in_capsule_data_size": 4096, 00:16:52.714 "max_io_size": 131072, 00:16:52.714 "io_unit_size": 131072, 00:16:52.714 "max_aq_depth": 128, 00:16:52.714 "num_shared_buffers": 511, 00:16:52.714 "buf_cache_size": 4294967295, 00:16:52.714 "dif_insert_or_strip": false, 00:16:52.714 "zcopy": false, 00:16:52.714 "c2h_success": false, 00:16:52.714 "sock_priority": 0, 00:16:52.714 "abort_timeout_sec": 1, 00:16:52.714 "ack_timeout": 0, 00:16:52.714 "data_wr_pool_size": 0 00:16:52.714 } 00:16:52.714 }, 00:16:52.714 { 00:16:52.714 "method": "nvmf_create_subsystem", 00:16:52.714 "params": { 00:16:52.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.714 "allow_any_host": false, 00:16:52.714 "serial_number": "SPDK00000000000001", 00:16:52.714 "model_number": "SPDK bdev Controller", 00:16:52.714 "max_namespaces": 10, 00:16:52.714 "min_cntlid": 1, 00:16:52.714 "max_cntlid": 65519, 00:16:52.714 "ana_reporting": false 00:16:52.714 } 00:16:52.714 }, 00:16:52.714 { 00:16:52.714 "method": "nvmf_subsystem_add_host", 00:16:52.714 "params": { 00:16:52.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.714 "host": "nqn.2016-06.io.spdk:host1", 00:16:52.714 "psk": "key0" 00:16:52.714 } 00:16:52.714 }, 00:16:52.714 { 00:16:52.714 "method": "nvmf_subsystem_add_ns", 00:16:52.714 "params": { 00:16:52.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:16:52.714 "namespace": { 00:16:52.714 "nsid": 1, 00:16:52.714 "bdev_name": "malloc0", 00:16:52.714 "nguid": "65DB1DDC893244AFB7A6AD3C67BD71B8", 00:16:52.714 "uuid": "65db1ddc-8932-44af-b7a6-ad3c67bd71b8", 00:16:52.714 "no_auto_visible": false 00:16:52.714 } 00:16:52.714 } 00:16:52.714 }, 00:16:52.714 { 00:16:52.714 "method": "nvmf_subsystem_add_listener", 00:16:52.714 "params": { 00:16:52.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.714 "listen_address": { 00:16:52.714 "trtype": "TCP", 00:16:52.714 "adrfam": "IPv4", 00:16:52.714 "traddr": "10.0.0.3", 00:16:52.714 "trsvcid": "4420" 00:16:52.714 }, 00:16:52.714 "secure_channel": true 00:16:52.714 } 00:16:52.714 } 00:16:52.714 ] 00:16:52.714 } 00:16:52.714 ] 00:16:52.714 }' 00:16:52.714 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:52.714 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:52.714 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.714 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=72889 00:16:52.714 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 72889 00:16:52.714 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:52.714 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72889 ']' 00:16:52.714 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.714 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.714 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.714 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.714 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.714 [2024-10-01 13:51:02.806808] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:16:52.714 [2024-10-01 13:51:02.806961] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.973 [2024-10-01 13:51:02.949256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.973 [2024-10-01 13:51:03.063814] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.973 [2024-10-01 13:51:03.063874] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.973 [2024-10-01 13:51:03.063887] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.973 [2024-10-01 13:51:03.063896] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.973 [2024-10-01 13:51:03.063904] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:52.973 [2024-10-01 13:51:03.064019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.287 [2024-10-01 13:51:03.233640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:53.287 [2024-10-01 13:51:03.316576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.287 [2024-10-01 13:51:03.354627] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:53.287 [2024-10-01 13:51:03.354845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:53.854 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.854 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:53.854 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:53.854 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:53.854 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:53.854 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.854 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72921 00:16:53.854 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:53.854 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72921 /var/tmp/bdevperf.sock 00:16:53.854 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72921 ']' 00:16:53.854 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.854 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:16:53.854 "subsystems": [ 00:16:53.854 { 00:16:53.854 "subsystem": "keyring", 00:16:53.854 "config": [ 00:16:53.854 { 00:16:53.854 "method": "keyring_file_add_key", 00:16:53.854 "params": { 00:16:53.854 "name": "key0", 00:16:53.854 "path": "/tmp/tmp.QoaiFeB8BL" 00:16:53.854 } 00:16:53.854 } 00:16:53.854 ] 00:16:53.854 }, 00:16:53.854 { 00:16:53.854 "subsystem": "iobuf", 00:16:53.854 "config": [ 00:16:53.854 { 00:16:53.854 "method": "iobuf_set_options", 00:16:53.854 "params": { 00:16:53.854 "small_pool_count": 8192, 00:16:53.854 "large_pool_count": 1024, 00:16:53.854 "small_bufsize": 8192, 00:16:53.854 "large_bufsize": 135168 00:16:53.854 } 00:16:53.854 } 00:16:53.854 ] 00:16:53.854 }, 00:16:53.854 { 00:16:53.854 "subsystem": "sock", 00:16:53.855 "config": [ 00:16:53.855 { 00:16:53.855 "method": "sock_set_default_impl", 00:16:53.855 "params": { 00:16:53.855 "impl_name": "uring" 00:16:53.855 } 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "method": "sock_impl_set_options", 00:16:53.855 "params": { 00:16:53.855 "impl_name": "ssl", 00:16:53.855 "recv_buf_size": 4096, 00:16:53.855 "send_buf_size": 4096, 00:16:53.855 "enable_recv_pipe": true, 00:16:53.855 "enable_quickack": false, 00:16:53.855 "enable_placement_id": 0, 00:16:53.855 "enable_zerocopy_send_server": true, 00:16:53.855 "enable_zerocopy_send_client": false, 00:16:53.855 "zerocopy_threshold": 0, 00:16:53.855 "tls_version": 0, 00:16:53.855 "enable_ktls": false 00:16:53.855 } 00:16:53.855 
}, 00:16:53.855 { 00:16:53.855 "method": "sock_impl_set_options", 00:16:53.855 "params": { 00:16:53.855 "impl_name": "posix", 00:16:53.855 "recv_buf_size": 2097152, 00:16:53.855 "send_buf_size": 2097152, 00:16:53.855 "enable_recv_pipe": true, 00:16:53.855 "enable_quickack": false, 00:16:53.855 "enable_placement_id": 0, 00:16:53.855 "enable_zerocopy_send_server": true, 00:16:53.855 "enable_zerocopy_send_client": false, 00:16:53.855 "zerocopy_threshold": 0, 00:16:53.855 "tls_version": 0, 00:16:53.855 "enable_ktls": false 00:16:53.855 } 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "method": "sock_impl_set_options", 00:16:53.855 "params": { 00:16:53.855 "impl_name": "uring", 00:16:53.855 "recv_buf_size": 2097152, 00:16:53.855 "send_buf_size": 2097152, 00:16:53.855 "enable_recv_pipe": true, 00:16:53.855 "enable_quickack": false, 00:16:53.855 "enable_placement_id": 0, 00:16:53.855 "enable_zerocopy_send_server": false, 00:16:53.855 "enable_zerocopy_send_client": false, 00:16:53.855 "zerocopy_threshold": 0, 00:16:53.855 "tls_version": 0, 00:16:53.855 "enable_ktls": false 00:16:53.855 } 00:16:53.855 } 00:16:53.855 ] 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "subsystem": "vmd", 00:16:53.855 "config": [] 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "subsystem": "accel", 00:16:53.855 "config": [ 00:16:53.855 { 00:16:53.855 "method": "accel_set_options", 00:16:53.855 "params": { 00:16:53.855 "small_cache_size": 128, 00:16:53.855 "large_cache_size": 16, 00:16:53.855 "task_count": 2048, 00:16:53.855 "sequence_count": 2048, 00:16:53.855 "buf_count": 2048 00:16:53.855 } 00:16:53.855 } 00:16:53.855 ] 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "subsystem": "bdev", 00:16:53.855 "config": [ 00:16:53.855 { 00:16:53.855 "method": "bdev_set_options", 00:16:53.855 "params": { 00:16:53.855 "bdev_io_pool_size": 65535, 00:16:53.855 "bdev_io_cache_size": 256, 00:16:53.855 "bdev_auto_examine": true, 00:16:53.855 "iobuf_small_cache_size": 128, 00:16:53.855 "iobuf_large_cache_size": 16 00:16:53.855 } 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "method": "bdev_raid_set_options", 00:16:53.855 "params": { 00:16:53.855 "process_window_size_kb": 1024, 00:16:53.855 "process_max_bandwidth_mb_sec": 0 00:16:53.855 } 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "method": "bdev_iscsi_set_options", 00:16:53.855 "params": { 00:16:53.855 "timeout_sec": 30 00:16:53.855 } 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "method": "bdev_nvme_set_options", 00:16:53.855 "params": { 00:16:53.855 "action_on_timeout": "none", 00:16:53.855 "timeout_us": 0, 00:16:53.855 "timeout_admin_us": 0, 00:16:53.855 "keep_alive_timeout_ms": 10000, 00:16:53.855 "arbitration_burst": 0, 00:16:53.855 "low_priority_weight": 0, 00:16:53.855 "medium_priority_weight": 0, 00:16:53.855 "high_priority_weight": 0, 00:16:53.855 "nvme_adminq_poll_period_us": 10000, 00:16:53.855 "nvme_ioq_poll_period_us": 0, 00:16:53.855 "io_queue_requests": 512, 00:16:53.855 "delay_cmd_submit": true, 00:16:53.855 "transport_retry_count": 4, 00:16:53.855 "bdev_retry_count": 3, 00:16:53.855 "transport_ack_timeout": 0, 00:16:53.855 "ctrlr_loss_timeout_sec": 0, 00:16:53.855 "reconnect_delay_sec": 0, 00:16:53.855 "fast_io_fail_timeout_sec": 0, 00:16:53.855 "disable_auto_failback": false, 00:16:53.855 "generate_uuids": false, 00:16:53.855 "transport_tos": 0, 00:16:53.855 "nvme_error_stat": false, 00:16:53.855 "rdma_srq_size": 0, 00:16:53.855 "io_path_stat": false, 00:16:53.855 "allow_accel_sequence": false, 00:16:53.855 "rdma_max_cq_size": 0, 00:16:53.855 "rdma_cm_event_timeout_ms": 0, 
00:16:53.855 "dhchap_digests": [ 00:16:53.855 "sha256", 00:16:53.855 "sha384", 00:16:53.855 "sha512" 00:16:53.855 ], 00:16:53.855 "dhchap_dhgroups": [ 00:16:53.855 "null", 00:16:53.855 "ffdhe2048", 00:16:53.855 "ffdhe3072", 00:16:53.855 "ffdhe4096", 00:16:53.855 "ffdhe6144", 00:16:53.855 "ffdhe8192" 00:16:53.855 ] 00:16:53.855 } 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "method": "bdev_nvme_attach_controller", 00:16:53.855 "params": { 00:16:53.855 "name": "TLSTEST", 00:16:53.855 "trtype": "TCP", 00:16:53.855 "adrfam": "IPv4", 00:16:53.855 "traddr": "10.0.0.3", 00:16:53.855 "trsvcid": "4420", 00:16:53.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.855 "prchk_reftag": false, 00:16:53.855 "prchk_guard": false, 00:16:53.855 "ctrlr_loss_timeout_sec": 0, 00:16:53.855 "reconnect_delay_sec": 0, 00:16:53.855 "fast_io_fail_timeout_sec": 0, 00:16:53.855 "psk": "key0", 00:16:53.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:53.855 "hdgst": false, 00:16:53.855 "ddgst": false, 00:16:53.855 "multipath": "multipath" 00:16:53.855 } 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "method": "bdev_nvme_set_hotplug", 00:16:53.855 "params": { 00:16:53.855 "period_us": 100000, 00:16:53.855 "enable": false 00:16:53.855 } 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "method": "bdev_wait_for_examine" 00:16:53.855 } 00:16:53.855 ] 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "subsystem": "nbd", 00:16:53.855 "config": [] 00:16:53.855 } 00:16:53.855 ] 00:16:53.855 }' 00:16:53.855 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:53.855 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.855 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:53.855 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:53.855 [2024-10-01 13:51:03.945159] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:16:53.855 [2024-10-01 13:51:03.945492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72921 ] 00:16:54.114 [2024-10-01 13:51:04.085445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.114 [2024-10-01 13:51:04.239151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.372 [2024-10-01 13:51:04.398482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:54.372 [2024-10-01 13:51:04.462960] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:54.937 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:54.938 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:54.938 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:54.938 Running I/O for 10 seconds... 
00:17:05.173 3713.00 IOPS, 14.50 MiB/s 3759.00 IOPS, 14.68 MiB/s 3760.67 IOPS, 14.69 MiB/s 3761.50 IOPS, 14.69 MiB/s 3729.40 IOPS, 14.57 MiB/s 3732.67 IOPS, 14.58 MiB/s 3728.43 IOPS, 14.56 MiB/s 3703.25 IOPS, 14.47 MiB/s 3712.78 IOPS, 14.50 MiB/s 3716.90 IOPS, 14.52 MiB/s 00:17:05.173 Latency(us) 00:17:05.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.173 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:05.173 Verification LBA range: start 0x0 length 0x2000 00:17:05.173 TLSTESTn1 : 10.02 3722.43 14.54 0.00 0.00 34327.70 6464.23 33840.41 00:17:05.173 =================================================================================================================== 00:17:05.173 Total : 3722.43 14.54 0.00 0.00 34327.70 6464.23 33840.41 00:17:05.173 { 00:17:05.173 "results": [ 00:17:05.173 { 00:17:05.173 "job": "TLSTESTn1", 00:17:05.173 "core_mask": "0x4", 00:17:05.173 "workload": "verify", 00:17:05.173 "status": "finished", 00:17:05.173 "verify_range": { 00:17:05.173 "start": 0, 00:17:05.173 "length": 8192 00:17:05.173 }, 00:17:05.173 "queue_depth": 128, 00:17:05.173 "io_size": 4096, 00:17:05.173 "runtime": 10.019258, 00:17:05.173 "iops": 3722.4313417221115, 00:17:05.173 "mibps": 14.540747428601998, 00:17:05.173 "io_failed": 0, 00:17:05.173 "io_timeout": 0, 00:17:05.173 "avg_latency_us": 34327.69804609805, 00:17:05.173 "min_latency_us": 6464.232727272727, 00:17:05.173 "max_latency_us": 33840.40727272727 00:17:05.173 } 00:17:05.173 ], 00:17:05.173 "core_count": 1 00:17:05.173 } 00:17:05.173 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:05.173 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72921 00:17:05.173 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72921 ']' 00:17:05.173 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72921 00:17:05.173 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:05.173 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:05.173 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72921 00:17:05.173 killing process with pid 72921 00:17:05.173 Received shutdown signal, test time was about 10.000000 seconds 00:17:05.173 00:17:05.173 Latency(us) 00:17:05.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.173 =================================================================================================================== 00:17:05.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.173 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:05.173 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:05.173 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72921' 00:17:05.173 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72921 00:17:05.173 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72921 00:17:05.432 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72889 00:17:05.432 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 72889 ']' 00:17:05.432 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72889 00:17:05.432 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:05.432 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:05.432 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72889 00:17:05.432 killing process with pid 72889 00:17:05.433 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:05.433 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:05.433 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72889' 00:17:05.433 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72889 00:17:05.433 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72889 00:17:05.690 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:17:05.690 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:05.690 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:05.690 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.690 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=73061 00:17:05.690 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:05.690 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 73061 00:17:05.690 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73061 ']' 00:17:05.690 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.690 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:05.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.690 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.690 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:05.690 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.690 [2024-10-01 13:51:15.860051] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:17:05.690 [2024-10-01 13:51:15.860164] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.949 [2024-10-01 13:51:16.002736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.208 [2024-10-01 13:51:16.132186] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
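The fresh target started here (pid 73061) runs inside the nvmf_tgt_ns_spdk network namespace, so its 10.0.0.3:4420 listener is only reachable through that namespace's interfaces. A quick sanity check, assuming the usual iproute2 tools are available in the test VM:

# Confirm the namespace owns 10.0.0.3 and, once the subsystem is configured, that the NVMe/TCP listener is up.
ip netns exec nvmf_tgt_ns_spdk ip -4 addr show
ip netns exec nvmf_tgt_ns_spdk ss -ltn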
00:17:06.208 [2024-10-01 13:51:16.132255] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.208 [2024-10-01 13:51:16.132270] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.208 [2024-10-01 13:51:16.132281] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.208 [2024-10-01 13:51:16.132290] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.208 [2024-10-01 13:51:16.132330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.208 [2024-10-01 13:51:16.190281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:06.775 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:06.775 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:06.775 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:06.775 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:06.775 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.033 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.033 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.QoaiFeB8BL 00:17:07.033 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QoaiFeB8BL 00:17:07.033 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:07.292 [2024-10-01 13:51:17.268846] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.292 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:07.556 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:07.815 [2024-10-01 13:51:17.824939] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:07.815 [2024-10-01 13:51:17.825196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:07.815 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:08.074 malloc0 00:17:08.074 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:08.333 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QoaiFeB8BL 00:17:08.901 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:08.901 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=73120 00:17:08.901 13:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:08.901 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:08.901 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 73120 /var/tmp/bdevperf.sock 00:17:08.901 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73120 ']' 00:17:08.901 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.901 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:08.901 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.901 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:08.901 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.159 [2024-10-01 13:51:19.114337] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:17:09.159 [2024-10-01 13:51:19.114447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73120 ] 00:17:09.159 [2024-10-01 13:51:19.246765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.418 [2024-10-01 13:51:19.361769] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.418 [2024-10-01 13:51:19.418173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:10.409 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:10.409 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:10.409 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QoaiFeB8BL 00:17:10.409 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:10.671 [2024-10-01 13:51:20.816486] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:10.929 nvme0n1 00:17:10.929 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:10.929 Running I/O for 1 seconds... 
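At this point the initiator side has registered the PSK (keyring_file_add_key) and attached a TLS-protected controller named nvme0, which exposes the target namespace as the nvme0n1 bdev used by the verify job below. One way to inspect the result before the run, assuming the standard rpc.py helpers shown elsewhere in this trace:

# List the attached NVMe controllers and the bdev the TLS connection exposes.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b nvme0n1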
00:17:12.307 3845.00 IOPS, 15.02 MiB/s 00:17:12.307 Latency(us) 00:17:12.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.307 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:12.307 Verification LBA range: start 0x0 length 0x2000 00:17:12.307 nvme0n1 : 1.02 3902.46 15.24 0.00 0.00 32416.94 651.64 19779.96 00:17:12.307 =================================================================================================================== 00:17:12.307 Total : 3902.46 15.24 0.00 0.00 32416.94 651.64 19779.96 00:17:12.307 { 00:17:12.307 "results": [ 00:17:12.307 { 00:17:12.307 "job": "nvme0n1", 00:17:12.307 "core_mask": "0x2", 00:17:12.307 "workload": "verify", 00:17:12.307 "status": "finished", 00:17:12.307 "verify_range": { 00:17:12.307 "start": 0, 00:17:12.307 "length": 8192 00:17:12.307 }, 00:17:12.307 "queue_depth": 128, 00:17:12.307 "io_size": 4096, 00:17:12.307 "runtime": 1.018332, 00:17:12.307 "iops": 3902.4601014207547, 00:17:12.307 "mibps": 15.243984771174823, 00:17:12.307 "io_failed": 0, 00:17:12.307 "io_timeout": 0, 00:17:12.307 "avg_latency_us": 32416.94373427277, 00:17:12.307 "min_latency_us": 651.6363636363636, 00:17:12.307 "max_latency_us": 19779.956363636364 00:17:12.307 } 00:17:12.307 ], 00:17:12.307 "core_count": 1 00:17:12.307 } 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 73120 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73120 ']' 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73120 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73120 00:17:12.307 killing process with pid 73120 00:17:12.307 Received shutdown signal, test time was about 1.000000 seconds 00:17:12.307 00:17:12.307 Latency(us) 00:17:12.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.307 =================================================================================================================== 00:17:12.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73120' 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73120 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73120 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 73061 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73061 ']' 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73061 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73061 00:17:12.307 killing process with pid 73061 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73061' 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73061 00:17:12.307 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73061 00:17:12.566 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:17:12.566 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:12.566 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:12.566 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.566 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=73175 00:17:12.566 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:12.566 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 73175 00:17:12.566 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73175 ']' 00:17:12.566 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.566 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:12.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.566 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.566 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:12.566 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.825 [2024-10-01 13:51:22.762308] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:17:12.825 [2024-10-01 13:51:22.762743] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.825 [2024-10-01 13:51:22.901682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.083 [2024-10-01 13:51:23.007134] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.083 [2024-10-01 13:51:23.007198] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.083 [2024-10-01 13:51:23.007225] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.083 [2024-10-01 13:51:23.007233] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
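waitforlisten, used here for the new target (pid 73175), polls until the application answers on its RPC socket before the test proceeds. Conceptually it is equivalent to the loop below; this is only a sketch, not the actual helper:

# Block until the target's RPC server at /var/tmp/spdk.sock responds.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done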
00:17:13.083 [2024-10-01 13:51:23.007239] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.083 [2024-10-01 13:51:23.007266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.083 [2024-10-01 13:51:23.066723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:13.083 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.083 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:13.083 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:13.083 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:13.083 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.083 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.083 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:17:13.083 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.083 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.083 [2024-10-01 13:51:23.195160] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.083 malloc0 00:17:13.083 [2024-10-01 13:51:23.241255] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:13.083 [2024-10-01 13:51:23.241736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:13.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.358 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.358 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=73201 00:17:13.358 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 73201 /var/tmp/bdevperf.sock 00:17:13.358 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:13.358 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73201 ']' 00:17:13.358 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.358 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:13.358 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.358 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:13.358 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.358 [2024-10-01 13:51:23.332351] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:17:13.358 [2024-10-01 13:51:23.332812] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73201 ] 00:17:13.358 [2024-10-01 13:51:23.476179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.616 [2024-10-01 13:51:23.608687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.616 [2024-10-01 13:51:23.666953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:13.616 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.616 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:13.616 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QoaiFeB8BL 00:17:13.935 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:14.502 [2024-10-01 13:51:24.429821] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:14.502 nvme0n1 00:17:14.503 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:14.503 Running I/O for 1 seconds... 00:17:15.880 3826.00 IOPS, 14.95 MiB/s 00:17:15.880 Latency(us) 00:17:15.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.880 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:15.880 Verification LBA range: start 0x0 length 0x2000 00:17:15.880 nvme0n1 : 1.03 3849.88 15.04 0.00 0.00 32800.49 7804.74 27644.28 00:17:15.880 =================================================================================================================== 00:17:15.880 Total : 3849.88 15.04 0.00 0.00 32800.49 7804.74 27644.28 00:17:15.880 { 00:17:15.880 "results": [ 00:17:15.880 { 00:17:15.880 "job": "nvme0n1", 00:17:15.880 "core_mask": "0x2", 00:17:15.880 "workload": "verify", 00:17:15.880 "status": "finished", 00:17:15.880 "verify_range": { 00:17:15.880 "start": 0, 00:17:15.880 "length": 8192 00:17:15.880 }, 00:17:15.880 "queue_depth": 128, 00:17:15.880 "io_size": 4096, 00:17:15.880 "runtime": 1.027306, 00:17:15.880 "iops": 3849.875304923752, 00:17:15.880 "mibps": 15.038575409858407, 00:17:15.880 "io_failed": 0, 00:17:15.880 "io_timeout": 0, 00:17:15.880 "avg_latency_us": 32800.48554970693, 00:17:15.880 "min_latency_us": 7804.741818181818, 00:17:15.880 "max_latency_us": 27644.276363636363 00:17:15.880 } 00:17:15.880 ], 00:17:15.880 "core_count": 1 00:17:15.880 } 00:17:15.880 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:17:15.880 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.880 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:15.880 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.880 13:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:17:15.880 "subsystems": [ 00:17:15.880 { 00:17:15.880 "subsystem": "keyring", 00:17:15.880 "config": [ 00:17:15.880 { 00:17:15.880 "method": "keyring_file_add_key", 00:17:15.880 "params": { 00:17:15.880 "name": "key0", 00:17:15.880 "path": "/tmp/tmp.QoaiFeB8BL" 00:17:15.880 } 00:17:15.880 } 00:17:15.880 ] 00:17:15.880 }, 00:17:15.880 { 00:17:15.880 "subsystem": "iobuf", 00:17:15.880 "config": [ 00:17:15.880 { 00:17:15.880 "method": "iobuf_set_options", 00:17:15.880 "params": { 00:17:15.880 "small_pool_count": 8192, 00:17:15.880 "large_pool_count": 1024, 00:17:15.880 "small_bufsize": 8192, 00:17:15.880 "large_bufsize": 135168 00:17:15.880 } 00:17:15.880 } 00:17:15.880 ] 00:17:15.880 }, 00:17:15.880 { 00:17:15.880 "subsystem": "sock", 00:17:15.880 "config": [ 00:17:15.880 { 00:17:15.880 "method": "sock_set_default_impl", 00:17:15.880 "params": { 00:17:15.880 "impl_name": "uring" 00:17:15.880 } 00:17:15.880 }, 00:17:15.880 { 00:17:15.880 "method": "sock_impl_set_options", 00:17:15.880 "params": { 00:17:15.880 "impl_name": "ssl", 00:17:15.880 "recv_buf_size": 4096, 00:17:15.880 "send_buf_size": 4096, 00:17:15.880 "enable_recv_pipe": true, 00:17:15.880 "enable_quickack": false, 00:17:15.880 "enable_placement_id": 0, 00:17:15.880 "enable_zerocopy_send_server": true, 00:17:15.880 "enable_zerocopy_send_client": false, 00:17:15.880 "zerocopy_threshold": 0, 00:17:15.880 "tls_version": 0, 00:17:15.880 "enable_ktls": false 00:17:15.880 } 00:17:15.880 }, 00:17:15.880 { 00:17:15.880 "method": "sock_impl_set_options", 00:17:15.880 "params": { 00:17:15.880 "impl_name": "posix", 00:17:15.880 "recv_buf_size": 2097152, 00:17:15.880 "send_buf_size": 2097152, 00:17:15.880 "enable_recv_pipe": true, 00:17:15.880 "enable_quickack": false, 00:17:15.880 "enable_placement_id": 0, 00:17:15.880 "enable_zerocopy_send_server": true, 00:17:15.880 "enable_zerocopy_send_client": false, 00:17:15.880 "zerocopy_threshold": 0, 00:17:15.880 "tls_version": 0, 00:17:15.880 "enable_ktls": false 00:17:15.880 } 00:17:15.880 }, 00:17:15.880 { 00:17:15.880 "method": "sock_impl_set_options", 00:17:15.880 "params": { 00:17:15.880 "impl_name": "uring", 00:17:15.880 "recv_buf_size": 2097152, 00:17:15.880 "send_buf_size": 2097152, 00:17:15.880 "enable_recv_pipe": true, 00:17:15.880 "enable_quickack": false, 00:17:15.880 "enable_placement_id": 0, 00:17:15.880 "enable_zerocopy_send_server": false, 00:17:15.880 "enable_zerocopy_send_client": false, 00:17:15.880 "zerocopy_threshold": 0, 00:17:15.880 "tls_version": 0, 00:17:15.880 "enable_ktls": false 00:17:15.880 } 00:17:15.880 } 00:17:15.880 ] 00:17:15.880 }, 00:17:15.880 { 00:17:15.880 "subsystem": "vmd", 00:17:15.880 "config": [] 00:17:15.880 }, 00:17:15.880 { 00:17:15.880 "subsystem": "accel", 00:17:15.880 "config": [ 00:17:15.880 { 00:17:15.880 "method": "accel_set_options", 00:17:15.880 "params": { 00:17:15.880 "small_cache_size": 128, 00:17:15.880 "large_cache_size": 16, 00:17:15.880 "task_count": 2048, 00:17:15.880 "sequence_count": 2048, 00:17:15.880 "buf_count": 2048 00:17:15.880 } 00:17:15.880 } 00:17:15.880 ] 00:17:15.880 }, 00:17:15.880 { 00:17:15.880 "subsystem": "bdev", 00:17:15.880 "config": [ 00:17:15.880 { 00:17:15.880 "method": "bdev_set_options", 00:17:15.880 "params": { 00:17:15.880 "bdev_io_pool_size": 65535, 00:17:15.880 "bdev_io_cache_size": 256, 00:17:15.880 "bdev_auto_examine": true, 00:17:15.880 "iobuf_small_cache_size": 128, 00:17:15.880 "iobuf_large_cache_size": 16 00:17:15.880 } 
00:17:15.880 }, 00:17:15.880 { 00:17:15.880 "method": "bdev_raid_set_options", 00:17:15.880 "params": { 00:17:15.880 "process_window_size_kb": 1024, 00:17:15.880 "process_max_bandwidth_mb_sec": 0 00:17:15.880 } 00:17:15.880 }, 00:17:15.880 { 00:17:15.880 "method": "bdev_iscsi_set_options", 00:17:15.880 "params": { 00:17:15.880 "timeout_sec": 30 00:17:15.880 } 00:17:15.880 }, 00:17:15.880 { 00:17:15.880 "method": "bdev_nvme_set_options", 00:17:15.880 "params": { 00:17:15.880 "action_on_timeout": "none", 00:17:15.880 "timeout_us": 0, 00:17:15.880 "timeout_admin_us": 0, 00:17:15.880 "keep_alive_timeout_ms": 10000, 00:17:15.880 "arbitration_burst": 0, 00:17:15.880 "low_priority_weight": 0, 00:17:15.880 "medium_priority_weight": 0, 00:17:15.880 "high_priority_weight": 0, 00:17:15.880 "nvme_adminq_poll_period_us": 10000, 00:17:15.880 "nvme_ioq_poll_period_us": 0, 00:17:15.880 "io_queue_requests": 0, 00:17:15.880 "delay_cmd_submit": true, 00:17:15.880 "transport_retry_count": 4, 00:17:15.880 "bdev_retry_count": 3, 00:17:15.880 "transport_ack_timeout": 0, 00:17:15.880 "ctrlr_loss_timeout_sec": 0, 00:17:15.880 "reconnect_delay_sec": 0, 00:17:15.880 "fast_io_fail_timeout_sec": 0, 00:17:15.880 "disable_auto_failback": false, 00:17:15.880 "generate_uuids": false, 00:17:15.880 "transport_tos": 0, 00:17:15.880 "nvme_error_stat": false, 00:17:15.880 "rdma_srq_size": 0, 00:17:15.880 "io_path_stat": false, 00:17:15.880 "allow_accel_sequence": false, 00:17:15.880 "rdma_max_cq_size": 0, 00:17:15.880 "rdma_cm_event_timeout_ms": 0, 00:17:15.880 "dhchap_digests": [ 00:17:15.880 "sha256", 00:17:15.880 "sha384", 00:17:15.880 "sha512" 00:17:15.880 ], 00:17:15.880 "dhchap_dhgroups": [ 00:17:15.880 "null", 00:17:15.880 "ffdhe2048", 00:17:15.880 "ffdhe3072", 00:17:15.880 "ffdhe4096", 00:17:15.880 "ffdhe6144", 00:17:15.880 "ffdhe8192" 00:17:15.880 ] 00:17:15.880 } 00:17:15.880 }, 00:17:15.880 { 00:17:15.880 "method": "bdev_nvme_set_hotplug", 00:17:15.880 "params": { 00:17:15.880 "period_us": 100000, 00:17:15.880 "enable": false 00:17:15.880 } 00:17:15.880 }, 00:17:15.880 { 00:17:15.880 "method": "bdev_malloc_create", 00:17:15.880 "params": { 00:17:15.880 "name": "malloc0", 00:17:15.880 "num_blocks": 8192, 00:17:15.881 "block_size": 4096, 00:17:15.881 "physical_block_size": 4096, 00:17:15.881 "uuid": "fdd17011-4562-437e-a97a-d964229d6d01", 00:17:15.881 "optimal_io_boundary": 0, 00:17:15.881 "md_size": 0, 00:17:15.881 "dif_type": 0, 00:17:15.881 "dif_is_head_of_md": false, 00:17:15.881 "dif_pi_format": 0 00:17:15.881 } 00:17:15.881 }, 00:17:15.881 { 00:17:15.881 "method": "bdev_wait_for_examine" 00:17:15.881 } 00:17:15.881 ] 00:17:15.881 }, 00:17:15.881 { 00:17:15.881 "subsystem": "nbd", 00:17:15.881 "config": [] 00:17:15.881 }, 00:17:15.881 { 00:17:15.881 "subsystem": "scheduler", 00:17:15.881 "config": [ 00:17:15.881 { 00:17:15.881 "method": "framework_set_scheduler", 00:17:15.881 "params": { 00:17:15.881 "name": "static" 00:17:15.881 } 00:17:15.881 } 00:17:15.881 ] 00:17:15.881 }, 00:17:15.881 { 00:17:15.881 "subsystem": "nvmf", 00:17:15.881 "config": [ 00:17:15.881 { 00:17:15.881 "method": "nvmf_set_config", 00:17:15.881 "params": { 00:17:15.881 "discovery_filter": "match_any", 00:17:15.881 "admin_cmd_passthru": { 00:17:15.881 "identify_ctrlr": false 00:17:15.881 }, 00:17:15.881 "dhchap_digests": [ 00:17:15.881 "sha256", 00:17:15.881 "sha384", 00:17:15.881 "sha512" 00:17:15.881 ], 00:17:15.881 "dhchap_dhgroups": [ 00:17:15.881 "null", 00:17:15.881 "ffdhe2048", 00:17:15.881 "ffdhe3072", 00:17:15.881 "ffdhe4096", 
00:17:15.881 "ffdhe6144", 00:17:15.881 "ffdhe8192" 00:17:15.881 ] 00:17:15.881 } 00:17:15.881 }, 00:17:15.881 { 00:17:15.881 "method": "nvmf_set_max_subsystems", 00:17:15.881 "params": { 00:17:15.881 "max_subsystems": 1024 00:17:15.881 } 00:17:15.881 }, 00:17:15.881 { 00:17:15.881 "method": "nvmf_set_crdt", 00:17:15.881 "params": { 00:17:15.881 "crdt1": 0, 00:17:15.881 "crdt2": 0, 00:17:15.881 "crdt3": 0 00:17:15.881 } 00:17:15.881 }, 00:17:15.881 { 00:17:15.881 "method": "nvmf_create_transport", 00:17:15.881 "params": { 00:17:15.881 "trtype": "TCP", 00:17:15.881 "max_queue_depth": 128, 00:17:15.881 "max_io_qpairs_per_ctrlr": 127, 00:17:15.881 "in_capsule_data_size": 4096, 00:17:15.881 "max_io_size": 131072, 00:17:15.881 "io_unit_size": 131072, 00:17:15.881 "max_aq_depth": 128, 00:17:15.881 "num_shared_buffers": 511, 00:17:15.881 "buf_cache_size": 4294967295, 00:17:15.881 "dif_insert_or_strip": false, 00:17:15.881 "zcopy": false, 00:17:15.881 "c2h_success": false, 00:17:15.881 "sock_priority": 0, 00:17:15.881 "abort_timeout_sec": 1, 00:17:15.881 "ack_timeout": 0, 00:17:15.881 "data_wr_pool_size": 0 00:17:15.881 } 00:17:15.881 }, 00:17:15.881 { 00:17:15.881 "method": "nvmf_create_subsystem", 00:17:15.881 "params": { 00:17:15.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.881 "allow_any_host": false, 00:17:15.881 "serial_number": "00000000000000000000", 00:17:15.881 "model_number": "SPDK bdev Controller", 00:17:15.881 "max_namespaces": 32, 00:17:15.881 "min_cntlid": 1, 00:17:15.881 "max_cntlid": 65519, 00:17:15.881 "ana_reporting": false 00:17:15.881 } 00:17:15.881 }, 00:17:15.881 { 00:17:15.881 "method": "nvmf_subsystem_add_host", 00:17:15.881 "params": { 00:17:15.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.881 "host": "nqn.2016-06.io.spdk:host1", 00:17:15.881 "psk": "key0" 00:17:15.881 } 00:17:15.881 }, 00:17:15.881 { 00:17:15.881 "method": "nvmf_subsystem_add_ns", 00:17:15.881 "params": { 00:17:15.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.881 "namespace": { 00:17:15.881 "nsid": 1, 00:17:15.881 "bdev_name": "malloc0", 00:17:15.881 "nguid": "FDD170114562437EA97AD964229D6D01", 00:17:15.881 "uuid": "fdd17011-4562-437e-a97a-d964229d6d01", 00:17:15.881 "no_auto_visible": false 00:17:15.881 } 00:17:15.881 } 00:17:15.881 }, 00:17:15.881 { 00:17:15.881 "method": "nvmf_subsystem_add_listener", 00:17:15.881 "params": { 00:17:15.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.881 "listen_address": { 00:17:15.881 "trtype": "TCP", 00:17:15.881 "adrfam": "IPv4", 00:17:15.881 "traddr": "10.0.0.3", 00:17:15.881 "trsvcid": "4420" 00:17:15.881 }, 00:17:15.881 "secure_channel": false, 00:17:15.881 "sock_impl": "ssl" 00:17:15.881 } 00:17:15.881 } 00:17:15.881 ] 00:17:15.881 } 00:17:15.881 ] 00:17:15.881 }' 00:17:15.881 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:16.139 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:17:16.139 "subsystems": [ 00:17:16.139 { 00:17:16.139 "subsystem": "keyring", 00:17:16.139 "config": [ 00:17:16.139 { 00:17:16.139 "method": "keyring_file_add_key", 00:17:16.139 "params": { 00:17:16.139 "name": "key0", 00:17:16.139 "path": "/tmp/tmp.QoaiFeB8BL" 00:17:16.139 } 00:17:16.139 } 00:17:16.139 ] 00:17:16.139 }, 00:17:16.139 { 00:17:16.139 "subsystem": "iobuf", 00:17:16.139 "config": [ 00:17:16.139 { 00:17:16.139 "method": "iobuf_set_options", 00:17:16.139 "params": { 00:17:16.139 "small_pool_count": 8192, 00:17:16.139 
"large_pool_count": 1024, 00:17:16.139 "small_bufsize": 8192, 00:17:16.139 "large_bufsize": 135168 00:17:16.139 } 00:17:16.139 } 00:17:16.139 ] 00:17:16.139 }, 00:17:16.139 { 00:17:16.139 "subsystem": "sock", 00:17:16.139 "config": [ 00:17:16.139 { 00:17:16.139 "method": "sock_set_default_impl", 00:17:16.139 "params": { 00:17:16.139 "impl_name": "uring" 00:17:16.139 } 00:17:16.139 }, 00:17:16.139 { 00:17:16.139 "method": "sock_impl_set_options", 00:17:16.139 "params": { 00:17:16.139 "impl_name": "ssl", 00:17:16.139 "recv_buf_size": 4096, 00:17:16.139 "send_buf_size": 4096, 00:17:16.139 "enable_recv_pipe": true, 00:17:16.139 "enable_quickack": false, 00:17:16.139 "enable_placement_id": 0, 00:17:16.139 "enable_zerocopy_send_server": true, 00:17:16.139 "enable_zerocopy_send_client": false, 00:17:16.139 "zerocopy_threshold": 0, 00:17:16.139 "tls_version": 0, 00:17:16.139 "enable_ktls": false 00:17:16.139 } 00:17:16.139 }, 00:17:16.139 { 00:17:16.139 "method": "sock_impl_set_options", 00:17:16.139 "params": { 00:17:16.139 "impl_name": "posix", 00:17:16.139 "recv_buf_size": 2097152, 00:17:16.139 "send_buf_size": 2097152, 00:17:16.139 "enable_recv_pipe": true, 00:17:16.139 "enable_quickack": false, 00:17:16.139 "enable_placement_id": 0, 00:17:16.139 "enable_zerocopy_send_server": true, 00:17:16.139 "enable_zerocopy_send_client": false, 00:17:16.139 "zerocopy_threshold": 0, 00:17:16.139 "tls_version": 0, 00:17:16.139 "enable_ktls": false 00:17:16.139 } 00:17:16.139 }, 00:17:16.139 { 00:17:16.139 "method": "sock_impl_set_options", 00:17:16.139 "params": { 00:17:16.139 "impl_name": "uring", 00:17:16.139 "recv_buf_size": 2097152, 00:17:16.139 "send_buf_size": 2097152, 00:17:16.139 "enable_recv_pipe": true, 00:17:16.139 "enable_quickack": false, 00:17:16.139 "enable_placement_id": 0, 00:17:16.139 "enable_zerocopy_send_server": false, 00:17:16.139 "enable_zerocopy_send_client": false, 00:17:16.139 "zerocopy_threshold": 0, 00:17:16.139 "tls_version": 0, 00:17:16.139 "enable_ktls": false 00:17:16.139 } 00:17:16.139 } 00:17:16.139 ] 00:17:16.139 }, 00:17:16.139 { 00:17:16.139 "subsystem": "vmd", 00:17:16.139 "config": [] 00:17:16.139 }, 00:17:16.139 { 00:17:16.139 "subsystem": "accel", 00:17:16.139 "config": [ 00:17:16.139 { 00:17:16.139 "method": "accel_set_options", 00:17:16.139 "params": { 00:17:16.139 "small_cache_size": 128, 00:17:16.139 "large_cache_size": 16, 00:17:16.139 "task_count": 2048, 00:17:16.139 "sequence_count": 2048, 00:17:16.139 "buf_count": 2048 00:17:16.139 } 00:17:16.139 } 00:17:16.139 ] 00:17:16.139 }, 00:17:16.139 { 00:17:16.139 "subsystem": "bdev", 00:17:16.139 "config": [ 00:17:16.139 { 00:17:16.139 "method": "bdev_set_options", 00:17:16.139 "params": { 00:17:16.139 "bdev_io_pool_size": 65535, 00:17:16.139 "bdev_io_cache_size": 256, 00:17:16.139 "bdev_auto_examine": true, 00:17:16.139 "iobuf_small_cache_size": 128, 00:17:16.139 "iobuf_large_cache_size": 16 00:17:16.140 } 00:17:16.140 }, 00:17:16.140 { 00:17:16.140 "method": "bdev_raid_set_options", 00:17:16.140 "params": { 00:17:16.140 "process_window_size_kb": 1024, 00:17:16.140 "process_max_bandwidth_mb_sec": 0 00:17:16.140 } 00:17:16.140 }, 00:17:16.140 { 00:17:16.140 "method": "bdev_iscsi_set_options", 00:17:16.140 "params": { 00:17:16.140 "timeout_sec": 30 00:17:16.140 } 00:17:16.140 }, 00:17:16.140 { 00:17:16.140 "method": "bdev_nvme_set_options", 00:17:16.140 "params": { 00:17:16.140 "action_on_timeout": "none", 00:17:16.140 "timeout_us": 0, 00:17:16.140 "timeout_admin_us": 0, 00:17:16.140 "keep_alive_timeout_ms": 10000, 
00:17:16.140 "arbitration_burst": 0, 00:17:16.140 "low_priority_weight": 0, 00:17:16.140 "medium_priority_weight": 0, 00:17:16.140 "high_priority_weight": 0, 00:17:16.140 "nvme_adminq_poll_period_us": 10000, 00:17:16.140 "nvme_ioq_poll_period_us": 0, 00:17:16.140 "io_queue_requests": 512, 00:17:16.140 "delay_cmd_submit": true, 00:17:16.140 "transport_retry_count": 4, 00:17:16.140 "bdev_retry_count": 3, 00:17:16.140 "transport_ack_timeout": 0, 00:17:16.140 "ctrlr_loss_timeout_sec": 0, 00:17:16.140 "reconnect_delay_sec": 0, 00:17:16.140 "fast_io_fail_timeout_sec": 0, 00:17:16.140 "disable_auto_failback": false, 00:17:16.140 "generate_uuids": false, 00:17:16.140 "transport_tos": 0, 00:17:16.140 "nvme_error_stat": false, 00:17:16.140 "rdma_srq_size": 0, 00:17:16.140 "io_path_stat": false, 00:17:16.140 "allow_accel_sequence": false, 00:17:16.140 "rdma_max_cq_size": 0, 00:17:16.140 "rdma_cm_event_timeout_ms": 0, 00:17:16.140 "dhchap_digests": [ 00:17:16.140 "sha256", 00:17:16.140 "sha384", 00:17:16.140 "sha512" 00:17:16.140 ], 00:17:16.140 "dhchap_dhgroups": [ 00:17:16.140 "null", 00:17:16.140 "ffdhe2048", 00:17:16.140 "ffdhe3072", 00:17:16.140 "ffdhe4096", 00:17:16.140 "ffdhe6144", 00:17:16.140 "ffdhe8192" 00:17:16.140 ] 00:17:16.140 } 00:17:16.140 }, 00:17:16.140 { 00:17:16.140 "method": "bdev_nvme_attach_controller", 00:17:16.140 "params": { 00:17:16.140 "name": "nvme0", 00:17:16.140 "trtype": "TCP", 00:17:16.140 "adrfam": "IPv4", 00:17:16.140 "traddr": "10.0.0.3", 00:17:16.140 "trsvcid": "4420", 00:17:16.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.140 "prchk_reftag": false, 00:17:16.140 "prchk_guard": false, 00:17:16.140 "ctrlr_loss_timeout_sec": 0, 00:17:16.140 "reconnect_delay_sec": 0, 00:17:16.140 "fast_io_fail_timeout_sec": 0, 00:17:16.140 "psk": "key0", 00:17:16.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:16.140 "hdgst": false, 00:17:16.140 "ddgst": false, 00:17:16.140 "multipath": "multipath" 00:17:16.140 } 00:17:16.140 }, 00:17:16.140 { 00:17:16.140 "method": "bdev_nvme_set_hotplug", 00:17:16.140 "params": { 00:17:16.140 "period_us": 100000, 00:17:16.140 "enable": false 00:17:16.140 } 00:17:16.140 }, 00:17:16.140 { 00:17:16.140 "method": "bdev_enable_histogram", 00:17:16.140 "params": { 00:17:16.140 "name": "nvme0n1", 00:17:16.140 "enable": true 00:17:16.140 } 00:17:16.140 }, 00:17:16.140 { 00:17:16.140 "method": "bdev_wait_for_examine" 00:17:16.140 } 00:17:16.140 ] 00:17:16.140 }, 00:17:16.140 { 00:17:16.140 "subsystem": "nbd", 00:17:16.140 "config": [] 00:17:16.140 } 00:17:16.140 ] 00:17:16.140 }' 00:17:16.140 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 73201 00:17:16.140 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73201 ']' 00:17:16.140 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73201 00:17:16.140 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:16.140 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:16.140 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73201 00:17:16.140 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:16.140 killing process with pid 73201 00:17:16.140 Received shutdown signal, test time was about 1.000000 seconds 00:17:16.140 00:17:16.140 Latency(us) 00:17:16.140 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.140 =================================================================================================================== 00:17:16.140 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.140 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:16.140 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73201' 00:17:16.140 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73201 00:17:16.140 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73201 00:17:16.398 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 73175 00:17:16.398 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73175 ']' 00:17:16.398 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73175 00:17:16.398 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:16.398 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:16.398 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73175 00:17:16.398 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:16.398 killing process with pid 73175 00:17:16.398 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:16.398 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73175' 00:17:16.398 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73175 00:17:16.398 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73175 00:17:16.658 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:17:16.658 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:17:16.658 "subsystems": [ 00:17:16.658 { 00:17:16.658 "subsystem": "keyring", 00:17:16.658 "config": [ 00:17:16.658 { 00:17:16.658 "method": "keyring_file_add_key", 00:17:16.658 "params": { 00:17:16.658 "name": "key0", 00:17:16.658 "path": "/tmp/tmp.QoaiFeB8BL" 00:17:16.658 } 00:17:16.658 } 00:17:16.658 ] 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "subsystem": "iobuf", 00:17:16.658 "config": [ 00:17:16.658 { 00:17:16.658 "method": "iobuf_set_options", 00:17:16.658 "params": { 00:17:16.658 "small_pool_count": 8192, 00:17:16.658 "large_pool_count": 1024, 00:17:16.658 "small_bufsize": 8192, 00:17:16.658 "large_bufsize": 135168 00:17:16.658 } 00:17:16.658 } 00:17:16.658 ] 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "subsystem": "sock", 00:17:16.658 "config": [ 00:17:16.658 { 00:17:16.658 "method": "sock_set_default_impl", 00:17:16.658 "params": { 00:17:16.658 "impl_name": "uring" 00:17:16.658 } 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "method": "sock_impl_set_options", 00:17:16.658 "params": { 00:17:16.658 "impl_name": "ssl", 00:17:16.658 "recv_buf_size": 4096, 00:17:16.658 "send_buf_size": 4096, 00:17:16.658 "enable_recv_pipe": true, 00:17:16.658 "enable_quickack": false, 00:17:16.658 "enable_placement_id": 0, 00:17:16.658 "enable_zerocopy_send_server": true, 00:17:16.658 "enable_zerocopy_send_client": false, 
00:17:16.658 "zerocopy_threshold": 0, 00:17:16.658 "tls_version": 0, 00:17:16.658 "enable_ktls": false 00:17:16.658 } 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "method": "sock_impl_set_options", 00:17:16.658 "params": { 00:17:16.658 "impl_name": "posix", 00:17:16.658 "recv_buf_size": 2097152, 00:17:16.658 "send_buf_size": 2097152, 00:17:16.658 "enable_recv_pipe": true, 00:17:16.658 "enable_quickack": false, 00:17:16.658 "enable_placement_id": 0, 00:17:16.658 "enable_zerocopy_send_server": true, 00:17:16.658 "enable_zerocopy_send_client": false, 00:17:16.658 "zerocopy_threshold": 0, 00:17:16.658 "tls_version": 0, 00:17:16.658 "enable_ktls": false 00:17:16.658 } 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "method": "sock_impl_set_options", 00:17:16.658 "params": { 00:17:16.658 "impl_name": "uring", 00:17:16.658 "recv_buf_size": 2097152, 00:17:16.658 "send_buf_size": 2097152, 00:17:16.658 "enable_recv_pipe": true, 00:17:16.658 "enable_quickack": false, 00:17:16.658 "enable_placement_id": 0, 00:17:16.658 "enable_zerocopy_send_server": false, 00:17:16.658 "enable_zerocopy_send_client": false, 00:17:16.658 "zerocopy_threshold": 0, 00:17:16.658 "tls_version": 0, 00:17:16.658 "enable_ktls": false 00:17:16.658 } 00:17:16.658 } 00:17:16.658 ] 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "subsystem": "vmd", 00:17:16.658 "config": [] 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "subsystem": "accel", 00:17:16.658 "config": [ 00:17:16.658 { 00:17:16.658 "method": "accel_set_options", 00:17:16.658 "params": { 00:17:16.658 "small_cache_size": 128, 00:17:16.658 "large_cache_size": 16, 00:17:16.658 "task_count": 2048, 00:17:16.658 "sequence_count": 2048, 00:17:16.658 "buf_count": 2048 00:17:16.658 } 00:17:16.658 } 00:17:16.658 ] 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "subsystem": "bdev", 00:17:16.658 "config": [ 00:17:16.658 { 00:17:16.658 "method": "bdev_set_options", 00:17:16.658 "params": { 00:17:16.658 "bdev_io_pool_size": 65535, 00:17:16.658 "bdev_io_cache_size": 256, 00:17:16.658 "bdev_auto_examine": true, 00:17:16.658 "iobuf_small_cache_size": 128, 00:17:16.658 "iobuf_large_cache_size": 16 00:17:16.658 } 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "method": "bdev_raid_set_options", 00:17:16.658 "params": { 00:17:16.658 "process_window_size_kb": 1024, 00:17:16.658 "process_max_bandwidth_mb_sec": 0 00:17:16.658 } 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "method": "bdev_iscsi_set_options", 00:17:16.658 "params": { 00:17:16.658 "timeout_sec": 30 00:17:16.658 } 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "method": "bdev_nvme_set_options", 00:17:16.658 "params": { 00:17:16.658 "action_on_timeout": "none", 00:17:16.658 "timeout_us": 0, 00:17:16.658 "timeout_admin_us": 0, 00:17:16.658 "keep_alive_timeout_ms": 10000, 00:17:16.658 "arbitration_burst": 0, 00:17:16.658 "low_priority_weight": 0, 00:17:16.658 "medium_priority_weight": 0, 00:17:16.658 "high_priority_weight": 0, 00:17:16.658 "nvme_adminq_poll_period_us": 10000, 00:17:16.658 "nvme_ioq_poll_period_us": 0, 00:17:16.658 "io_queue_requests": 0, 00:17:16.658 "delay_cmd_submit": true, 00:17:16.658 "transport_retry_count": 4, 00:17:16.658 "bdev_retry_count": 3, 00:17:16.658 "transport_ack_timeout": 0, 00:17:16.658 "ctrlr_loss_timeout_sec": 0, 00:17:16.658 "reconnect_delay_sec": 0, 00:17:16.658 "fast_io_fail_timeout_sec": 0, 00:17:16.658 "disable_auto_failback": false, 00:17:16.658 "generate_uuids": false, 00:17:16.658 "transport_tos": 0, 00:17:16.658 "nvme_error_stat": false, 00:17:16.658 "rdma_srq_size": 0, 00:17:16.658 "io_path_stat": false, 
00:17:16.658 "allow_accel_sequence": false, 00:17:16.658 "rdma_max_cq_size": 0, 00:17:16.658 "rdma_cm_event_timeout_ms": 0, 00:17:16.658 "dhchap_digests": [ 00:17:16.658 "sha256", 00:17:16.658 "sha384", 00:17:16.658 "sha512" 00:17:16.658 ], 00:17:16.658 "dhchap_dhgroups": [ 00:17:16.658 "null", 00:17:16.658 "ffdhe2048", 00:17:16.658 "ffdhe3072", 00:17:16.658 "ffdhe4096", 00:17:16.658 "ffdhe6144", 00:17:16.658 "ffdhe8192" 00:17:16.658 ] 00:17:16.658 } 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "method": "bdev_nvme_set_hotplug", 00:17:16.658 "params": { 00:17:16.658 "period_us": 100000, 00:17:16.658 "enable": false 00:17:16.658 } 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "method": "bdev_malloc_create", 00:17:16.658 "params": { 00:17:16.659 "name": "malloc0", 00:17:16.659 "num_blocks": 8192, 00:17:16.659 "block_size": 4096, 00:17:16.659 "physical_block_size": 4096, 00:17:16.659 "uuid": "fdd17011-4562-437e-a97a-d964229d6d01", 00:17:16.659 "optimal_io_boundary": 0, 00:17:16.659 "md_size": 0, 00:17:16.659 "dif_type": 0, 00:17:16.659 "dif_is_head_of_md": false, 00:17:16.659 "dif_pi_format": 0 00:17:16.659 } 00:17:16.659 }, 00:17:16.659 { 00:17:16.659 "method": "bdev_wait_for_examine" 00:17:16.659 } 00:17:16.659 ] 00:17:16.659 }, 00:17:16.659 { 00:17:16.659 "subsystem": "nbd", 00:17:16.659 "config": [] 00:17:16.659 }, 00:17:16.659 { 00:17:16.659 "subsystem": "scheduler", 00:17:16.659 "config": [ 00:17:16.659 { 00:17:16.659 "method": "framework_set_scheduler", 00:17:16.659 "params": { 00:17:16.659 "name": "static" 00:17:16.659 } 00:17:16.659 } 00:17:16.659 ] 00:17:16.659 }, 00:17:16.659 { 00:17:16.659 "subsystem": "nvmf", 00:17:16.659 "config": [ 00:17:16.659 { 00:17:16.659 "method": "nvmf_set_config", 00:17:16.659 "params": { 00:17:16.659 "discovery_filter": "match_any", 00:17:16.659 "admin_cmd_passthru": { 00:17:16.659 "identify_ctrlr": false 00:17:16.659 }, 00:17:16.659 "dhchap_digests": [ 00:17:16.659 "sha256", 00:17:16.659 "sha384", 00:17:16.659 "sha512" 00:17:16.659 ], 00:17:16.659 "dhchap_dhgroups": [ 00:17:16.659 "null", 00:17:16.659 "ffdhe2048", 00:17:16.659 "ffdhe3072", 00:17:16.659 "ffdhe4096", 00:17:16.659 "ffdhe6144", 00:17:16.659 "ffdhe8192" 00:17:16.659 ] 00:17:16.659 } 00:17:16.659 }, 00:17:16.659 { 00:17:16.659 "method": "nvmf_set_max_subsystems", 00:17:16.659 "params": { 00:17:16.659 "max_subsystems": 1024 00:17:16.659 } 00:17:16.659 }, 00:17:16.659 { 00:17:16.659 "method": "nvmf_set_crdt", 00:17:16.659 "params": { 00:17:16.659 "crdt1": 0, 00:17:16.659 "crdt2": 0, 00:17:16.659 "crdt3": 0 00:17:16.659 } 00:17:16.659 }, 00:17:16.659 { 00:17:16.659 "method": "nvmf_create_transport", 00:17:16.659 "params": { 00:17:16.659 "trtype": "TCP", 00:17:16.659 "max_queue_depth": 128, 00:17:16.659 "max_io_qpairs_per_ctrlr": 127, 00:17:16.659 "in_capsule_data_size": 4096, 00:17:16.659 "max_io_size": 131072, 00:17:16.659 "io_unit_size": 131072, 00:17:16.659 "max_aq_depth": 128, 00:17:16.659 "num_shared_buffers": 511, 00:17:16.659 "buf_cache_size": 4294967295, 00:17:16.659 "dif_insert_or_strip": false, 00:17:16.659 "zcopy": false, 00:17:16.659 "c2h_success": false, 00:17:16.659 "sock_priority": 0, 00:17:16.659 "abort_timeout_sec": 1, 00:17:16.659 "ack_timeout": 0, 00:17:16.659 "data_wr_pool_size": 0 00:17:16.659 } 00:17:16.659 }, 00:17:16.659 { 00:17:16.659 "method": "nvmf_create_subsystem", 00:17:16.659 "params": { 00:17:16.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.659 "allow_any_host": false, 00:17:16.659 "serial_number": "00000000000000000000", 00:17:16.659 "model_number": "SPDK bdev 
Controller", 00:17:16.659 "max_namespaces": 32, 00:17:16.659 "min_cntlid": 1, 00:17:16.659 "max_cntlid": 65519, 00:17:16.659 "ana_reporting": false 00:17:16.659 } 00:17:16.659 }, 00:17:16.659 { 00:17:16.659 "method": "nvmf_subsystem_add_host", 00:17:16.659 "params": { 00:17:16.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.659 "host": "nqn.2016-06.io.spdk:host1", 00:17:16.659 "psk": "key0" 00:17:16.659 } 00:17:16.659 }, 00:17:16.659 { 00:17:16.659 "method": "nvmf_subsystem_add_ns", 00:17:16.659 "params": { 00:17:16.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.659 "namespace": { 00:17:16.659 "nsid": 1, 00:17:16.659 "bdev_name": "malloc0", 00:17:16.659 "nguid": "FDD170114562437EA97AD964229D6D01", 00:17:16.659 "uuid": "fdd17011-4562-437e-a97a-d964229d6d01", 00:17:16.659 "no_auto_visible": false 00:17:16.659 } 00:17:16.659 } 00:17:16.659 }, 00:17:16.659 { 00:17:16.659 "method": "nvmf_subsystem_add_listener", 00:17:16.659 "params": { 00:17:16.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.659 "listen_address": { 00:17:16.659 "trtype": "TCP", 00:17:16.659 "adrfam": "IPv4", 00:17:16.659 "traddr": "10.0.0.3", 00:17:16.659 "trsvcid": "4420" 00:17:16.659 }, 00:17:16.659 "secure_channel": false, 00:17:16.659 "sock_impl": "ssl" 00:17:16.659 } 00:17:16.659 } 00:17:16.659 ] 00:17:16.659 } 00:17:16.659 ] 00:17:16.659 }' 00:17:16.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:16.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:16.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=73254 00:17:16.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 73254 00:17:16.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:16.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73254 ']' 00:17:16.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:16.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:16.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.918 [2024-10-01 13:51:26.845182] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:17:16.918 [2024-10-01 13:51:26.845289] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.918 [2024-10-01 13:51:26.987960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.176 [2024-10-01 13:51:27.169177] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.176 [2024-10-01 13:51:27.169259] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.176 [2024-10-01 13:51:27.169286] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.176 [2024-10-01 13:51:27.169296] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.176 [2024-10-01 13:51:27.169305] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.176 [2024-10-01 13:51:27.169447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.434 [2024-10-01 13:51:27.363448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:17.434 [2024-10-01 13:51:27.465810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.434 [2024-10-01 13:51:27.510112] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:17.434 [2024-10-01 13:51:27.510618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:17.693 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:17.693 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:17.693 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:17.693 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:17.693 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:17.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:17.951 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.951 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=73286 00:17:17.951 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 73286 /var/tmp/bdevperf.sock 00:17:17.951 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73286 ']' 00:17:17.951 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.951 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:17.951 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
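The first bdevperf instance in this test (pid 73201) was configured live over its RPC socket: the PSK was registered with keyring_file_add_key and the TLS-enabled controller attached with bdev_nvme_attach_controller --psk key0 before perform_tests was called. The second instance started below takes the other route and is handed the saved bperfcfg JSON on /dev/fd/63 at startup. For reference, the RPC-driven sequence as run earlier in this log is approximately (paths shortened):

  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QoaiFeB8BL
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Either way the attached controller ends up named nvme0, which is what the bdev_nvme_get_controllers check further down relies on.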
00:17:17.951 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:17.951 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:17.951 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:17.951 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:17:17.951 "subsystems": [ 00:17:17.951 { 00:17:17.951 "subsystem": "keyring", 00:17:17.951 "config": [ 00:17:17.951 { 00:17:17.951 "method": "keyring_file_add_key", 00:17:17.951 "params": { 00:17:17.951 "name": "key0", 00:17:17.951 "path": "/tmp/tmp.QoaiFeB8BL" 00:17:17.951 } 00:17:17.951 } 00:17:17.951 ] 00:17:17.951 }, 00:17:17.951 { 00:17:17.951 "subsystem": "iobuf", 00:17:17.951 "config": [ 00:17:17.951 { 00:17:17.951 "method": "iobuf_set_options", 00:17:17.951 "params": { 00:17:17.951 "small_pool_count": 8192, 00:17:17.951 "large_pool_count": 1024, 00:17:17.951 "small_bufsize": 8192, 00:17:17.951 "large_bufsize": 135168 00:17:17.951 } 00:17:17.951 } 00:17:17.951 ] 00:17:17.951 }, 00:17:17.951 { 00:17:17.951 "subsystem": "sock", 00:17:17.951 "config": [ 00:17:17.951 { 00:17:17.951 "method": "sock_set_default_impl", 00:17:17.951 "params": { 00:17:17.951 "impl_name": "uring" 00:17:17.951 } 00:17:17.951 }, 00:17:17.951 { 00:17:17.951 "method": "sock_impl_set_options", 00:17:17.951 "params": { 00:17:17.951 "impl_name": "ssl", 00:17:17.951 "recv_buf_size": 4096, 00:17:17.951 "send_buf_size": 4096, 00:17:17.951 "enable_recv_pipe": true, 00:17:17.951 "enable_quickack": false, 00:17:17.951 "enable_placement_id": 0, 00:17:17.951 "enable_zerocopy_send_server": true, 00:17:17.951 "enable_zerocopy_send_client": false, 00:17:17.951 "zerocopy_threshold": 0, 00:17:17.951 "tls_version": 0, 00:17:17.951 "enable_ktls": false 00:17:17.951 } 00:17:17.951 }, 00:17:17.951 { 00:17:17.951 "method": "sock_impl_set_options", 00:17:17.951 "params": { 00:17:17.951 "impl_name": "posix", 00:17:17.951 "recv_buf_size": 2097152, 00:17:17.951 "send_buf_size": 2097152, 00:17:17.951 "enable_recv_pipe": true, 00:17:17.951 "enable_quickack": false, 00:17:17.951 "enable_placement_id": 0, 00:17:17.951 "enable_zerocopy_send_server": true, 00:17:17.951 "enable_zerocopy_send_client": false, 00:17:17.951 "zerocopy_threshold": 0, 00:17:17.951 "tls_version": 0, 00:17:17.951 "enable_ktls": false 00:17:17.951 } 00:17:17.951 }, 00:17:17.951 { 00:17:17.951 "method": "sock_impl_set_options", 00:17:17.951 "params": { 00:17:17.951 "impl_name": "uring", 00:17:17.951 "recv_buf_size": 2097152, 00:17:17.951 "send_buf_size": 2097152, 00:17:17.951 "enable_recv_pipe": true, 00:17:17.951 "enable_quickack": false, 00:17:17.951 "enable_placement_id": 0, 00:17:17.951 "enable_zerocopy_send_server": false, 00:17:17.951 "enable_zerocopy_send_client": false, 00:17:17.951 "zerocopy_threshold": 0, 00:17:17.951 "tls_version": 0, 00:17:17.951 "enable_ktls": false 00:17:17.951 } 00:17:17.951 } 00:17:17.951 ] 00:17:17.951 }, 00:17:17.951 { 00:17:17.951 "subsystem": "vmd", 00:17:17.951 "config": [] 00:17:17.951 }, 00:17:17.951 { 00:17:17.951 "subsystem": "accel", 00:17:17.951 "config": [ 00:17:17.951 { 00:17:17.951 "method": "accel_set_options", 00:17:17.951 "params": { 00:17:17.951 "small_cache_size": 128, 00:17:17.951 "large_cache_size": 16, 00:17:17.951 "task_count": 2048, 00:17:17.951 "sequence_count": 2048, 00:17:17.951 "buf_count": 2048 
00:17:17.951 } 00:17:17.951 } 00:17:17.951 ] 00:17:17.951 }, 00:17:17.951 { 00:17:17.951 "subsystem": "bdev", 00:17:17.951 "config": [ 00:17:17.951 { 00:17:17.951 "method": "bdev_set_options", 00:17:17.951 "params": { 00:17:17.951 "bdev_io_pool_size": 65535, 00:17:17.951 "bdev_io_cache_size": 256, 00:17:17.951 "bdev_auto_examine": true, 00:17:17.951 "iobuf_small_cache_size": 128, 00:17:17.951 "iobuf_large_cache_size": 16 00:17:17.951 } 00:17:17.951 }, 00:17:17.951 { 00:17:17.951 "method": "bdev_raid_set_options", 00:17:17.951 "params": { 00:17:17.951 "process_window_size_kb": 1024, 00:17:17.951 "process_max_bandwidth_mb_sec": 0 00:17:17.951 } 00:17:17.951 }, 00:17:17.951 { 00:17:17.951 "method": "bdev_iscsi_set_options", 00:17:17.951 "params": { 00:17:17.951 "timeout_sec": 30 00:17:17.951 } 00:17:17.951 }, 00:17:17.951 { 00:17:17.951 "method": "bdev_nvme_set_options", 00:17:17.951 "params": { 00:17:17.951 "action_on_timeout": "none", 00:17:17.951 "timeout_us": 0, 00:17:17.951 "timeout_admin_us": 0, 00:17:17.951 "keep_alive_timeout_ms": 10000, 00:17:17.951 "arbitration_burst": 0, 00:17:17.951 "low_priority_weight": 0, 00:17:17.951 "medium_priority_weight": 0, 00:17:17.951 "high_priority_weight": 0, 00:17:17.951 "nvme_adminq_poll_period_us": 10000, 00:17:17.951 "nvme_ioq_poll_period_us": 0, 00:17:17.951 "io_queue_requests": 512, 00:17:17.951 "delay_cmd_submit": true, 00:17:17.951 "transport_retry_count": 4, 00:17:17.951 "bdev_retry_count": 3, 00:17:17.951 "transport_ack_timeout": 0, 00:17:17.951 "ctrlr_loss_timeout_sec": 0, 00:17:17.951 "reconnect_delay_sec": 0, 00:17:17.951 "fast_io_fail_timeout_sec": 0, 00:17:17.951 "disable_auto_failback": false, 00:17:17.951 "generate_uuids": false, 00:17:17.951 "transport_tos": 0, 00:17:17.951 "nvme_error_stat": false, 00:17:17.951 "rdma_srq_size": 0, 00:17:17.951 "io_path_stat": false, 00:17:17.951 "allow_accel_sequence": false, 00:17:17.951 "rdma_max_cq_size": 0, 00:17:17.951 "rdma_cm_event_timeout_ms": 0, 00:17:17.951 "dhchap_digests": [ 00:17:17.951 "sha256", 00:17:17.951 "sha384", 00:17:17.951 "sha512" 00:17:17.951 ], 00:17:17.951 "dhchap_dhgroups": [ 00:17:17.951 "null", 00:17:17.951 "ffdhe2048", 00:17:17.952 "ffdhe3072", 00:17:17.952 "ffdhe4096", 00:17:17.952 "ffdhe6144", 00:17:17.952 "ffdhe8192" 00:17:17.952 ] 00:17:17.952 } 00:17:17.952 }, 00:17:17.952 { 00:17:17.952 "method": "bdev_nvme_attach_controller", 00:17:17.952 "params": { 00:17:17.952 "name": "nvme0", 00:17:17.952 "trtype": "TCP", 00:17:17.952 "adrfam": "IPv4", 00:17:17.952 "traddr": "10.0.0.3", 00:17:17.952 "trsvcid": "4420", 00:17:17.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.952 "prchk_reftag": false, 00:17:17.952 "prchk_guard": false, 00:17:17.952 "ctrlr_loss_timeout_sec": 0, 00:17:17.952 "reconnect_delay_sec": 0, 00:17:17.952 "fast_io_fail_timeout_sec": 0, 00:17:17.952 "psk": "key0", 00:17:17.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.952 "hdgst": false, 00:17:17.952 "ddgst": false, 00:17:17.952 "multipath": "multipath" 00:17:17.952 } 00:17:17.952 }, 00:17:17.952 { 00:17:17.952 "method": "bdev_nvme_set_hotplug", 00:17:17.952 "params": { 00:17:17.952 "period_us": 100000, 00:17:17.952 "enable": false 00:17:17.952 } 00:17:17.952 }, 00:17:17.952 { 00:17:17.952 "method": "bdev_enable_histogram", 00:17:17.952 "params": { 00:17:17.952 "name": "nvme0n1", 00:17:17.952 "enable": true 00:17:17.952 } 00:17:17.952 }, 00:17:17.952 { 00:17:17.952 "method": "bdev_wait_for_examine" 00:17:17.952 } 00:17:17.952 ] 00:17:17.952 }, 00:17:17.952 { 00:17:17.952 "subsystem": "nbd", 
00:17:17.952 "config": [] 00:17:17.952 } 00:17:17.952 ] 00:17:17.952 }' 00:17:17.952 [2024-10-01 13:51:27.962366] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:17:17.952 [2024-10-01 13:51:27.962470] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73286 ] 00:17:17.952 [2024-10-01 13:51:28.101636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.209 [2024-10-01 13:51:28.220959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.209 [2024-10-01 13:51:28.359251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:18.518 [2024-10-01 13:51:28.409472] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.087 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:19.087 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:19.087 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:19.087 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:17:19.345 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.345 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:19.345 Running I/O for 1 seconds... 
00:17:20.277 4020.00 IOPS, 15.70 MiB/s 00:17:20.277 Latency(us) 00:17:20.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.277 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:20.277 Verification LBA range: start 0x0 length 0x2000 00:17:20.277 nvme0n1 : 1.02 4062.18 15.87 0.00 0.00 31100.81 5510.98 24307.90 00:17:20.277 =================================================================================================================== 00:17:20.277 Total : 4062.18 15.87 0.00 0.00 31100.81 5510.98 24307.90 00:17:20.277 { 00:17:20.277 "results": [ 00:17:20.277 { 00:17:20.277 "job": "nvme0n1", 00:17:20.277 "core_mask": "0x2", 00:17:20.277 "workload": "verify", 00:17:20.277 "status": "finished", 00:17:20.277 "verify_range": { 00:17:20.277 "start": 0, 00:17:20.277 "length": 8192 00:17:20.277 }, 00:17:20.277 "queue_depth": 128, 00:17:20.277 "io_size": 4096, 00:17:20.277 "runtime": 1.021374, 00:17:20.277 "iops": 4062.1750700526936, 00:17:20.277 "mibps": 15.867871367393334, 00:17:20.277 "io_failed": 0, 00:17:20.277 "io_timeout": 0, 00:17:20.277 "avg_latency_us": 31100.80531475273, 00:17:20.277 "min_latency_us": 5510.981818181818, 00:17:20.277 "max_latency_us": 24307.898181818182 00:17:20.277 } 00:17:20.277 ], 00:17:20.277 "core_count": 1 00:17:20.277 } 00:17:20.277 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:17:20.277 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:17:20.277 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:20.277 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:17:20.277 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:17:20.277 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:17:20.277 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:20.277 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:17:20.277 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:17:20.277 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:17:20.277 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:20.277 nvmf_trace.0 00:17:20.534 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:17:20.534 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 73286 00:17:20.534 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73286 ']' 00:17:20.534 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73286 00:17:20.534 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:20.534 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:20.534 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73286 00:17:20.534 killing process with pid 73286 00:17:20.534 Received shutdown signal, test time was about 
1.000000 seconds 00:17:20.534 00:17:20.534 Latency(us) 00:17:20.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.534 =================================================================================================================== 00:17:20.534 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:20.534 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:20.534 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:20.534 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73286' 00:17:20.534 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73286 00:17:20.534 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73286 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:20.791 rmmod nvme_tcp 00:17:20.791 rmmod nvme_fabrics 00:17:20.791 rmmod nvme_keyring 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 73254 ']' 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 73254 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73254 ']' 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73254 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73254 00:17:20.791 killing process with pid 73254 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73254' 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73254 00:17:20.791 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73254 00:17:21.049 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:21.049 13:51:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:21.049 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:21.049 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:17:21.049 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:17:21.049 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:21.049 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:17:21.049 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:21.049 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:21.049 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:21.049 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.qpQnjnEcNq /tmp/tmp.PK6msyk05u /tmp/tmp.QoaiFeB8BL 00:17:21.307 ************************************ 00:17:21.307 END TEST nvmf_tls 00:17:21.307 ************************************ 00:17:21.307 00:17:21.307 real 1m35.644s 00:17:21.307 user 2m35.535s 00:17:21.307 sys 0m30.915s 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:21.307 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.568 
13:51:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:21.568 ************************************ 00:17:21.568 START TEST nvmf_fips 00:17:21.568 ************************************ 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:21.568 * Looking for test storage... 00:17:21.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:21.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:21.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.569 --rc genhtml_branch_coverage=1 00:17:21.569 --rc genhtml_function_coverage=1 00:17:21.569 --rc genhtml_legend=1 00:17:21.569 --rc geninfo_all_blocks=1 00:17:21.569 --rc geninfo_unexecuted_blocks=1 00:17:21.569 00:17:21.569 ' 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:21.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.569 --rc genhtml_branch_coverage=1 00:17:21.569 --rc genhtml_function_coverage=1 00:17:21.569 --rc genhtml_legend=1 00:17:21.569 --rc geninfo_all_blocks=1 00:17:21.569 --rc geninfo_unexecuted_blocks=1 00:17:21.569 00:17:21.569 ' 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:21.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.569 --rc genhtml_branch_coverage=1 00:17:21.569 --rc genhtml_function_coverage=1 00:17:21.569 --rc genhtml_legend=1 00:17:21.569 --rc geninfo_all_blocks=1 00:17:21.569 --rc geninfo_unexecuted_blocks=1 00:17:21.569 00:17:21.569 ' 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:21.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.569 --rc genhtml_branch_coverage=1 00:17:21.569 --rc genhtml_function_coverage=1 00:17:21.569 --rc genhtml_legend=1 00:17:21.569 --rc geninfo_all_blocks=1 00:17:21.569 --rc geninfo_unexecuted_blocks=1 00:17:21.569 00:17:21.569 ' 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
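The FIPS test begins like the rest of the suite: fips.sh sources test/nvmf/common.sh, whose setup (traced around this point) includes generating a fresh host NQN for the run with nvme gen-hostnqn and deriving NVME_HOSTID from it. In shell terms the effect is roughly the following sketch; the exact derivation lives in common.sh:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare UUID, passed as --hostid alongside --hostnqn

The point is simply that each test run gets a unique host identity, as the NVME_HOST array assignment below shows.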
00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.569 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:21.828 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:21.828 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:21.829 Error setting digest 00:17:21.829 40624E96947F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:21.829 40624E96947F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:21.829 
13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:21.829 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:21.830 Cannot find device "nvmf_init_br" 00:17:21.830 13:51:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:21.830 Cannot find device "nvmf_init_br2" 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:21.830 Cannot find device "nvmf_tgt_br" 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.830 Cannot find device "nvmf_tgt_br2" 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:17:21.830 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:21.830 Cannot find device "nvmf_init_br" 00:17:21.830 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:17:21.830 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:22.088 Cannot find device "nvmf_init_br2" 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:22.088 Cannot find device "nvmf_tgt_br" 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:22.088 Cannot find device "nvmf_tgt_br2" 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:22.088 Cannot find device "nvmf_br" 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:22.088 Cannot find device "nvmf_init_if" 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:22.088 Cannot find device "nvmf_init_if2" 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:22.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:22.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:22.088 13:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:22.088 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:22.345 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:22.345 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:22.345 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:22.345 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:22.345 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:22.345 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:22.345 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:22.345 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:22.345 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:22.345 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:22.345 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:17:22.345 00:17:22.345 --- 10.0.0.3 ping statistics --- 00:17:22.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.345 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:17:22.345 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:22.345 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:22.345 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:17:22.345 00:17:22.345 --- 10.0.0.4 ping statistics --- 00:17:22.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.345 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:22.345 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:22.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:22.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:22.346 00:17:22.346 --- 10.0.0.1 ping statistics --- 00:17:22.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.346 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:22.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:22.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:17:22.346 00:17:22.346 --- 10.0.0.2 ping statistics --- 00:17:22.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.346 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=73623 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 73623 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73623 ']' 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.346 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:22.346 [2024-10-01 13:51:32.465280] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:17:22.346 [2024-10-01 13:51:32.465724] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.603 [2024-10-01 13:51:32.606082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.604 [2024-10-01 13:51:32.736060] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.604 [2024-10-01 13:51:32.736125] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.604 [2024-10-01 13:51:32.736138] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.604 [2024-10-01 13:51:32.736146] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.604 [2024-10-01 13:51:32.736154] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.604 [2024-10-01 13:51:32.736183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.861 [2024-10-01 13:51:32.795717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:23.427 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:23.428 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:17:23.428 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:23.428 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:23.428 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:23.428 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.428 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:23.428 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:23.428 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:17:23.428 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Z4d 00:17:23.428 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:23.428 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Z4d 00:17:23.428 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Z4d 00:17:23.428 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Z4d 00:17:23.428 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.686 [2024-10-01 13:51:33.861030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.944 [2024-10-01 13:51:33.876986] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:23.944 [2024-10-01 13:51:33.877478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:23.944 malloc0 00:17:23.944 13:51:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:23.944 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73659 00:17:23.944 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:23.944 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73659 /var/tmp/bdevperf.sock 00:17:23.944 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73659 ']' 00:17:23.944 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.944 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:23.944 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.944 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:23.944 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:23.944 [2024-10-01 13:51:34.044360] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:17:23.944 [2024-10-01 13:51:34.044484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73659 ] 00:17:24.221 [2024-10-01 13:51:34.186936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.221 [2024-10-01 13:51:34.313455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.221 [2024-10-01 13:51:34.369353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:25.170 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:25.170 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:17:25.170 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Z4d 00:17:25.429 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:25.687 [2024-10-01 13:51:35.625504] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:25.687 TLSTESTn1 00:17:25.687 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:25.687 Running I/O for 10 seconds... 
00:17:35.961 3561.00 IOPS, 13.91 MiB/s 3664.50 IOPS, 14.31 MiB/s 3736.00 IOPS, 14.59 MiB/s 3763.50 IOPS, 14.70 MiB/s 3783.00 IOPS, 14.78 MiB/s 3797.67 IOPS, 14.83 MiB/s 3788.57 IOPS, 14.80 MiB/s 3791.88 IOPS, 14.81 MiB/s 3789.11 IOPS, 14.80 MiB/s 3792.40 IOPS, 14.81 MiB/s 00:17:35.961 Latency(us) 00:17:35.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.961 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:35.961 Verification LBA range: start 0x0 length 0x2000 00:17:35.961 TLSTESTn1 : 10.02 3798.06 14.84 0.00 0.00 33637.06 6553.60 32648.84 00:17:35.961 =================================================================================================================== 00:17:35.961 Total : 3798.06 14.84 0.00 0.00 33637.06 6553.60 32648.84 00:17:35.961 { 00:17:35.961 "results": [ 00:17:35.961 { 00:17:35.961 "job": "TLSTESTn1", 00:17:35.961 "core_mask": "0x4", 00:17:35.961 "workload": "verify", 00:17:35.961 "status": "finished", 00:17:35.961 "verify_range": { 00:17:35.961 "start": 0, 00:17:35.961 "length": 8192 00:17:35.961 }, 00:17:35.961 "queue_depth": 128, 00:17:35.961 "io_size": 4096, 00:17:35.961 "runtime": 10.018543, 00:17:35.961 "iops": 3798.057262418298, 00:17:35.961 "mibps": 14.836161181321476, 00:17:35.961 "io_failed": 0, 00:17:35.961 "io_timeout": 0, 00:17:35.961 "avg_latency_us": 33637.06194815092, 00:17:35.961 "min_latency_us": 6553.6, 00:17:35.961 "max_latency_us": 32648.843636363636 00:17:35.961 } 00:17:35.961 ], 00:17:35.961 "core_count": 1 00:17:35.961 } 00:17:35.961 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:35.961 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:35.962 nvmf_trace.0 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73659 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73659 ']' 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73659 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:35.962 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 73659 00:17:35.962 killing process with pid 73659 00:17:35.962 Received shutdown signal, test time was about 10.000000 seconds 00:17:35.962 00:17:35.962 Latency(us) 00:17:35.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.962 =================================================================================================================== 00:17:35.962 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:35.962 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:35.962 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:35.962 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73659' 00:17:35.962 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73659 00:17:35.962 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73659 00:17:36.219 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:36.219 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:36.219 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:36.477 rmmod nvme_tcp 00:17:36.477 rmmod nvme_fabrics 00:17:36.477 rmmod nvme_keyring 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 73623 ']' 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 73623 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73623 ']' 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73623 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73623 00:17:36.477 killing process with pid 73623 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73623' 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73623 00:17:36.477 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 
73623 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:36.735 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:36.995 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:36.995 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:36.995 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:36.995 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.995 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.995 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.995 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:17:36.995 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Z4d 00:17:36.995 ************************************ 00:17:36.995 END TEST nvmf_fips 00:17:36.995 ************************************ 00:17:36.995 00:17:36.995 real 0m15.499s 00:17:36.995 user 0m21.344s 00:17:36.995 sys 0m6.164s 00:17:36.995 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:17:36.995 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:36.995 13:51:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:36.995 13:51:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:36.995 13:51:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:36.995 13:51:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:36.995 ************************************ 00:17:36.995 START TEST nvmf_control_msg_list 00:17:36.995 ************************************ 00:17:36.995 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:36.995 * Looking for test storage... 00:17:36.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:36.995 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:36.995 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:17:36.995 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:37.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.253 --rc genhtml_branch_coverage=1 00:17:37.253 --rc genhtml_function_coverage=1 00:17:37.253 --rc genhtml_legend=1 00:17:37.253 --rc geninfo_all_blocks=1 00:17:37.253 --rc geninfo_unexecuted_blocks=1 00:17:37.253 00:17:37.253 ' 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:37.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.253 --rc genhtml_branch_coverage=1 00:17:37.253 --rc genhtml_function_coverage=1 00:17:37.253 --rc genhtml_legend=1 00:17:37.253 --rc geninfo_all_blocks=1 00:17:37.253 --rc geninfo_unexecuted_blocks=1 00:17:37.253 00:17:37.253 ' 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:37.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.253 --rc genhtml_branch_coverage=1 00:17:37.253 --rc genhtml_function_coverage=1 00:17:37.253 --rc genhtml_legend=1 00:17:37.253 --rc geninfo_all_blocks=1 00:17:37.253 --rc geninfo_unexecuted_blocks=1 00:17:37.253 00:17:37.253 ' 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:37.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.253 --rc genhtml_branch_coverage=1 00:17:37.253 --rc genhtml_function_coverage=1 00:17:37.253 --rc genhtml_legend=1 00:17:37.253 --rc geninfo_all_blocks=1 00:17:37.253 --rc geninfo_unexecuted_blocks=1 00:17:37.253 00:17:37.253 ' 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.253 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.254 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:37.254 Cannot find device "nvmf_init_br" 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:37.254 Cannot find device "nvmf_init_br2" 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:37.254 Cannot find device "nvmf_tgt_br" 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.254 Cannot find device "nvmf_tgt_br2" 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:37.254 Cannot find device "nvmf_init_br" 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:37.254 Cannot find device "nvmf_init_br2" 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:37.254 Cannot find device "nvmf_tgt_br" 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:17:37.254 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:37.254 Cannot find device "nvmf_tgt_br2" 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:37.513 Cannot find device "nvmf_br" 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:37.513 Cannot find 
device "nvmf_init_if" 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:37.513 Cannot find device "nvmf_init_if2" 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:37.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:37.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:37.513 13:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:37.513 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:37.513 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:17:37.513 00:17:37.513 --- 10.0.0.3 ping statistics --- 00:17:37.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.513 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:37.513 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:37.513 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:17:37.513 00:17:37.513 --- 10.0.0.4 ping statistics --- 00:17:37.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.513 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:37.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:37.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:17:37.513 00:17:37.513 --- 10.0.0.1 ping statistics --- 00:17:37.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.513 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:37.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:17:37.513 00:17:37.513 --- 10.0.0.2 ping statistics --- 00:17:37.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.513 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.513 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:37.772 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:37.772 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=74064 00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 74064 00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 74064 ']' 00:17:37.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.773 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:37.773 [2024-10-01 13:51:47.792826] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:17:37.773 [2024-10-01 13:51:47.793174] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.773 [2024-10-01 13:51:47.933396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.031 [2024-10-01 13:51:48.065948] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.031 [2024-10-01 13:51:48.066017] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.031 [2024-10-01 13:51:48.066032] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.031 [2024-10-01 13:51:48.066043] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.031 [2024-10-01 13:51:48.066053] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.031 [2024-10-01 13:51:48.066085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.031 [2024-10-01 13:51:48.123321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:38.967 [2024-10-01 13:51:48.945244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:38.967 Malloc0 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.967 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:38.967 [2024-10-01 13:51:48.998295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:38.967 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.967 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=74096 00:17:38.967 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:38.967 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:38.967 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=74097 00:17:38.967 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=74098 00:17:38.967 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:38.967 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 74096 00:17:39.226 [2024-10-01 13:51:49.182945] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:39.226 [2024-10-01 13:51:49.183539] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:39.226 [2024-10-01 13:51:49.192877] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:40.161 Initializing NVMe Controllers 00:17:40.161 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:40.161 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:17:40.161 Initialization complete. Launching workers. 00:17:40.161 ======================================================== 00:17:40.161 Latency(us) 00:17:40.161 Device Information : IOPS MiB/s Average min max 00:17:40.161 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3426.00 13.38 291.48 171.93 909.06 00:17:40.161 ======================================================== 00:17:40.161 Total : 3426.00 13.38 291.48 171.93 909.06 00:17:40.161 00:17:40.161 Initializing NVMe Controllers 00:17:40.161 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:40.161 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:17:40.161 Initialization complete. Launching workers. 00:17:40.161 ======================================================== 00:17:40.161 Latency(us) 00:17:40.161 Device Information : IOPS MiB/s Average min max 00:17:40.161 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3432.00 13.41 291.01 220.27 782.53 00:17:40.161 ======================================================== 00:17:40.161 Total : 3432.00 13.41 291.01 220.27 782.53 00:17:40.161 00:17:40.161 Initializing NVMe Controllers 00:17:40.161 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:40.161 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:17:40.161 Initialization complete. Launching workers. 
00:17:40.161 ======================================================== 00:17:40.161 Latency(us) 00:17:40.161 Device Information : IOPS MiB/s Average min max 00:17:40.161 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3463.00 13.53 288.34 121.12 784.31 00:17:40.161 ======================================================== 00:17:40.161 Total : 3463.00 13.53 288.34 121.12 784.31 00:17:40.161 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 74097 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 74098 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:40.161 rmmod nvme_tcp 00:17:40.161 rmmod nvme_fabrics 00:17:40.161 rmmod nvme_keyring 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 74064 ']' 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 74064 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 74064 ']' 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 74064 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:40.161 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74064 00:17:40.419 killing process with pid 74064 00:17:40.419 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:40.419 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:40.419 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74064' 00:17:40.419 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 74064 00:17:40.419 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 74064 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:17:40.677 00:17:40.677 real 0m3.769s 00:17:40.677 user 0m5.908s 00:17:40.677 
sys 0m1.393s 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:40.677 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:40.677 ************************************ 00:17:40.677 END TEST nvmf_control_msg_list 00:17:40.677 ************************************ 00:17:40.936 13:51:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:40.936 13:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:40.936 13:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:40.936 13:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:40.936 ************************************ 00:17:40.936 START TEST nvmf_wait_for_buf 00:17:40.936 ************************************ 00:17:40.936 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:40.936 * Looking for test storage... 00:17:40.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:40.936 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:40.936 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:17:40.936 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:40.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.936 --rc genhtml_branch_coverage=1 00:17:40.936 --rc genhtml_function_coverage=1 00:17:40.936 --rc genhtml_legend=1 00:17:40.936 --rc geninfo_all_blocks=1 00:17:40.936 --rc geninfo_unexecuted_blocks=1 00:17:40.936 00:17:40.936 ' 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:40.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.936 --rc genhtml_branch_coverage=1 00:17:40.936 --rc genhtml_function_coverage=1 00:17:40.936 --rc genhtml_legend=1 00:17:40.936 --rc geninfo_all_blocks=1 00:17:40.936 --rc geninfo_unexecuted_blocks=1 00:17:40.936 00:17:40.936 ' 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:40.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.936 --rc genhtml_branch_coverage=1 00:17:40.936 --rc genhtml_function_coverage=1 00:17:40.936 --rc genhtml_legend=1 00:17:40.936 --rc geninfo_all_blocks=1 00:17:40.936 --rc geninfo_unexecuted_blocks=1 00:17:40.936 00:17:40.936 ' 00:17:40.936 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:40.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.937 --rc genhtml_branch_coverage=1 00:17:40.937 --rc genhtml_function_coverage=1 00:17:40.937 --rc genhtml_legend=1 00:17:40.937 --rc geninfo_all_blocks=1 00:17:40.937 --rc geninfo_unexecuted_blocks=1 00:17:40.937 00:17:40.937 ' 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:40.937 13:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:40.937 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
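As an aside, the nvmf_tgt command line launched a few entries above for the control_msg_list test (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF) is put together from array operations like the ones being traced here. A rough sketch of that assembly, with the initial NVMF_APP value assumed since it is not visible in this log:

NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)      # assumed starting value; not shown in this trace
NVMF_APP_SHM_ID=0
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                     # build_nvmf_app_args (common.sh@29)
NVMF_APP+=("${NO_HUGE[@]}")                                     # empty in this run (common.sh@31)
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)             # common.sh@156
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")          # prepend the namespace wrapper (common.sh@227)
"${NVMF_APP[@]}"   # expands to: ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF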
00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:40.937 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:41.196 Cannot find device "nvmf_init_br" 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:41.196 Cannot find device "nvmf_init_br2" 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:41.196 Cannot find device "nvmf_tgt_br" 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:41.196 Cannot find device "nvmf_tgt_br2" 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:41.196 Cannot find device "nvmf_init_br" 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:41.196 Cannot find device "nvmf_init_br2" 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:41.196 Cannot find device "nvmf_tgt_br" 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:41.196 Cannot find device "nvmf_tgt_br2" 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:41.196 Cannot find device "nvmf_br" 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:41.196 Cannot find device "nvmf_init_if" 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:41.196 Cannot find device "nvmf_init_if2" 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:41.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.196 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:41.196 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:41.197 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:41.197 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:41.197 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:41.197 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:41.197 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:41.197 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:41.197 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:41.197 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:41.197 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:41.197 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:41.197 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:41.455 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:41.455 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:17:41.455 00:17:41.455 --- 10.0.0.3 ping statistics --- 00:17:41.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.455 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:41.455 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:41.455 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:17:41.455 00:17:41.455 --- 10.0.0.4 ping statistics --- 00:17:41.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.455 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:41.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:41.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:41.455 00:17:41.455 --- 10.0.0.1 ping statistics --- 00:17:41.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.455 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:41.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:41.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:17:41.455 00:17:41.455 --- 10.0.0.2 ping statistics --- 00:17:41.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.455 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=74334 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 74334 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 74334 ']' 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.455 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:41.455 [2024-10-01 13:51:51.540404] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
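[editor's note] For readers following the nvmf/common.sh setup recorded above, the veth/namespace/bridge topology it builds can be reproduced standalone roughly as below. This is a condensed sketch taken from the commands in the log (one initiator/target pair shown; the run above creates a second nvmf_init_if2/nvmf_tgt_if2 pair and the 10.0.0.2/10.0.0.4 addresses the same way), not the helper itself.

    # target-side veth ends live in a private namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addressing as in the log: initiator 10.0.0.1, target 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the two peer ends so host and namespace can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # open the NVMe/TCP port and sanity-check connectivity (the pings above)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1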
00:17:41.456 [2024-10-01 13:51:51.540489] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.714 [2024-10-01 13:51:51.675991] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.714 [2024-10-01 13:51:51.828748] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.714 [2024-10-01 13:51:51.828837] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.714 [2024-10-01 13:51:51.828853] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.714 [2024-10-01 13:51:51.828864] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.714 [2024-10-01 13:51:51.828874] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.714 [2024-10-01 13:51:51.828909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.699 13:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.699 [2024-10-01 13:51:52.732696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.699 Malloc0 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.699 [2024-10-01 13:51:52.801210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.699 [2024-10-01 13:51:52.825294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.699 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:42.957 [2024-10-01 13:51:53.012095] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:44.332 Initializing NVMe Controllers 00:17:44.332 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:44.332 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:17:44.332 Initialization complete. Launching workers. 00:17:44.332 ======================================================== 00:17:44.332 Latency(us) 00:17:44.332 Device Information : IOPS MiB/s Average min max 00:17:44.332 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 500.00 62.50 8000.00 7931.01 8131.69 00:17:44.332 ======================================================== 00:17:44.332 Total : 500.00 62.50 8000.00 7931.01 8131.69 00:17:44.332 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:44.332 rmmod nvme_tcp 00:17:44.332 rmmod nvme_fabrics 00:17:44.332 rmmod nvme_keyring 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 74334 ']' 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 74334 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 74334 ']' 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # 
kill -0 74334 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74334 00:17:44.332 killing process with pid 74334 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74334' 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 74334 00:17:44.332 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 74334 00:17:44.590 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:44.590 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:44.590 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:44.590 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:17:44.590 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:17:44.590 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:17:44.590 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:44.590 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:44.590 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:44.590 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:44.590 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:44.590 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:44.590 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:17:44.858 00:17:44.858 real 0m4.049s 00:17:44.858 user 0m3.676s 00:17:44.858 sys 0m0.830s 00:17:44.858 ************************************ 00:17:44.858 END TEST nvmf_wait_for_buf 00:17:44.858 ************************************ 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:44.858 00:17:44.858 real 5m31.338s 00:17:44.858 user 11m34.443s 00:17:44.858 sys 1m14.052s 00:17:44.858 ************************************ 00:17:44.858 END TEST nvmf_target_extra 00:17:44.858 ************************************ 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:44.858 13:51:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:45.143 13:51:55 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:45.143 13:51:55 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:45.143 13:51:55 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.143 13:51:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:45.143 ************************************ 00:17:45.143 START TEST nvmf_host 00:17:45.143 ************************************ 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:45.143 * Looking for test storage... 
00:17:45.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:45.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.143 --rc genhtml_branch_coverage=1 00:17:45.143 --rc genhtml_function_coverage=1 00:17:45.143 --rc genhtml_legend=1 00:17:45.143 --rc geninfo_all_blocks=1 00:17:45.143 --rc geninfo_unexecuted_blocks=1 00:17:45.143 00:17:45.143 ' 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:45.143 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:45.143 --rc genhtml_branch_coverage=1 00:17:45.143 --rc genhtml_function_coverage=1 00:17:45.143 --rc genhtml_legend=1 00:17:45.143 --rc geninfo_all_blocks=1 00:17:45.143 --rc geninfo_unexecuted_blocks=1 00:17:45.143 00:17:45.143 ' 00:17:45.143 13:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:45.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.143 --rc genhtml_branch_coverage=1 00:17:45.143 --rc genhtml_function_coverage=1 00:17:45.144 --rc genhtml_legend=1 00:17:45.144 --rc geninfo_all_blocks=1 00:17:45.144 --rc geninfo_unexecuted_blocks=1 00:17:45.144 00:17:45.144 ' 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:45.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.144 --rc genhtml_branch_coverage=1 00:17:45.144 --rc genhtml_function_coverage=1 00:17:45.144 --rc genhtml_legend=1 00:17:45.144 --rc geninfo_all_blocks=1 00:17:45.144 --rc geninfo_unexecuted_blocks=1 00:17:45.144 00:17:45.144 ' 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:45.144 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:45.144 
13:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.144 ************************************ 00:17:45.144 START TEST nvmf_identify 00:17:45.144 ************************************ 00:17:45.144 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:45.403 * Looking for test storage... 00:17:45.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:45.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.403 --rc genhtml_branch_coverage=1 00:17:45.403 --rc genhtml_function_coverage=1 00:17:45.403 --rc genhtml_legend=1 00:17:45.403 --rc geninfo_all_blocks=1 00:17:45.403 --rc geninfo_unexecuted_blocks=1 00:17:45.403 00:17:45.403 ' 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:45.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.403 --rc genhtml_branch_coverage=1 00:17:45.403 --rc genhtml_function_coverage=1 00:17:45.403 --rc genhtml_legend=1 00:17:45.403 --rc geninfo_all_blocks=1 00:17:45.403 --rc geninfo_unexecuted_blocks=1 00:17:45.403 00:17:45.403 ' 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:45.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.403 --rc genhtml_branch_coverage=1 00:17:45.403 --rc genhtml_function_coverage=1 00:17:45.403 --rc genhtml_legend=1 00:17:45.403 --rc geninfo_all_blocks=1 00:17:45.403 --rc geninfo_unexecuted_blocks=1 00:17:45.403 00:17:45.403 ' 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:45.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.403 --rc genhtml_branch_coverage=1 00:17:45.403 --rc genhtml_function_coverage=1 00:17:45.403 --rc genhtml_legend=1 00:17:45.403 --rc geninfo_all_blocks=1 00:17:45.403 --rc geninfo_unexecuted_blocks=1 00:17:45.403 00:17:45.403 ' 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.403 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.404 
13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:45.404 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.404 13:51:55 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:45.404 Cannot find device "nvmf_init_br" 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:45.404 Cannot find device "nvmf_init_br2" 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:45.404 Cannot find device "nvmf_tgt_br" 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:45.404 Cannot find device "nvmf_tgt_br2" 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:45.404 Cannot find device "nvmf_init_br" 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:45.404 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:45.663 Cannot find device "nvmf_init_br2" 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:45.663 Cannot find device "nvmf_tgt_br" 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:45.663 Cannot find device "nvmf_tgt_br2" 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:45.663 Cannot find device "nvmf_br" 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:45.663 Cannot find device "nvmf_init_if" 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:45.663 Cannot find device "nvmf_init_if2" 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:45.663 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:45.663 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:45.663 
13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:45.663 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:45.921 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:45.921 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:45.921 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:45.921 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:45.921 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:45.921 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:45.921 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:45.921 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:45.921 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:45.921 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:45.921 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:45.922 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:45.922 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:17:45.922 00:17:45.922 --- 10.0.0.3 ping statistics --- 00:17:45.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.922 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:45.922 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:45.922 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:17:45.922 00:17:45.922 --- 10.0.0.4 ping statistics --- 00:17:45.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.922 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:45.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:45.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:17:45.922 00:17:45.922 --- 10.0.0.1 ping statistics --- 00:17:45.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.922 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:45.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:17:45.922 00:17:45.922 --- 10.0.0.2 ping statistics --- 00:17:45.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.922 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74672 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74672 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 74672 ']' 00:17:45.922 
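The nvmf/common.sh block above first tears down any leftover interfaces (failures tolerated), then builds the veth/bridge topology and verifies it with the four pings. The same topology can be reproduced standalone with plain iproute2 and iptables; the sketch below uses the interface names and 10.0.0.0/24 addresses from this log and omits the SPDK_NVMF comment tags the harness attaches to its iptables rules for later cleanup.

# Sketch: rebuild the test network by hand (names/addresses as echoed above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4              # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # namespace -> host
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

This gives the initiator side (root namespace, 10.0.0.1/2) a bridged L2 path to the target side (nvmf_tgt_ns_spdk, 10.0.0.3/4), with TCP port 4420 opened for the NVMe/TCP listeners created later in the run.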
13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.922 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.922 [2024-10-01 13:51:56.030941] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:17:45.922 [2024-10-01 13:51:56.031056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.181 [2024-10-01 13:51:56.172118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.181 [2024-10-01 13:51:56.304158] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.181 [2024-10-01 13:51:56.304500] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.181 [2024-10-01 13:51:56.304659] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.181 [2024-10-01 13:51:56.304726] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.181 [2024-10-01 13:51:56.304843] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
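The nvmfpid=74672 line above corresponds to nvmf_tgt being launched inside the target namespace with the command echoed earlier (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), after which the harness's waitforlisten polls until the application answers on /var/tmp/spdk.sock. A minimal standalone equivalent, assuming the same repo layout and substituting a crude socket-file poll for waitforlisten, would be:

# Sketch: start the target in its namespace and wait for the RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # rough stand-in for waitforlisten
echo "nvmf_tgt running as pid $nvmfpid"

Because the JSON-RPC endpoint is a UNIX domain socket on the shared filesystem, the rpc_cmd calls that follow can be issued from the host side even though the target's network stack is confined to nvmf_tgt_ns_spdk.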
00:17:46.181 [2024-10-01 13:51:56.305322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.181 [2024-10-01 13:51:56.305508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.181 [2024-10-01 13:51:56.305602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.181 [2024-10-01 13:51:56.306392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.440 [2024-10-01 13:51:56.363797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:47.008 [2024-10-01 13:51:57.092226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:47.008 Malloc0 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.008 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:47.272 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.272 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:47.272 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.272 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:47.272 [2024-10-01 13:51:57.192074] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:47.272 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.272 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:47.272 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.272 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:47.272 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.272 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:47.272 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.272 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:47.272 [ 00:17:47.272 { 00:17:47.272 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:47.272 "subtype": "Discovery", 00:17:47.272 "listen_addresses": [ 00:17:47.272 { 00:17:47.272 "trtype": "TCP", 00:17:47.272 "adrfam": "IPv4", 00:17:47.272 "traddr": "10.0.0.3", 00:17:47.272 "trsvcid": "4420" 00:17:47.272 } 00:17:47.272 ], 00:17:47.272 "allow_any_host": true, 00:17:47.272 "hosts": [] 00:17:47.272 }, 00:17:47.272 { 00:17:47.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.272 "subtype": "NVMe", 00:17:47.272 "listen_addresses": [ 00:17:47.272 { 00:17:47.272 "trtype": "TCP", 00:17:47.272 "adrfam": "IPv4", 00:17:47.272 "traddr": "10.0.0.3", 00:17:47.272 "trsvcid": "4420" 00:17:47.272 } 00:17:47.272 ], 00:17:47.272 "allow_any_host": true, 00:17:47.272 "hosts": [], 00:17:47.272 "serial_number": "SPDK00000000000001", 00:17:47.272 "model_number": "SPDK bdev Controller", 00:17:47.272 "max_namespaces": 32, 00:17:47.272 "min_cntlid": 1, 00:17:47.272 "max_cntlid": 65519, 00:17:47.272 "namespaces": [ 00:17:47.272 { 00:17:47.272 "nsid": 1, 00:17:47.272 "bdev_name": "Malloc0", 00:17:47.272 "name": "Malloc0", 00:17:47.272 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:47.272 "eui64": "ABCDEF0123456789", 00:17:47.272 "uuid": "3097fbe6-42d9-4a7f-89ea-7924fc205064" 00:17:47.272 } 00:17:47.272 ] 00:17:47.272 } 00:17:47.272 ] 00:17:47.272 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.272 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:47.272 [2024-10-01 13:51:57.252356] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
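The rpc_cmd sequence above is what produces the nvmf_get_subsystems dump: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as namespace 1, and listeners on 10.0.0.3:4420 for both cnode1 and the discovery subsystem. Replayed standalone through scripts/rpc.py against the same socket (flags copied from the log, rpc.py path assumed relative to the SPDK repo), the equivalent calls would look roughly like:

# Sketch: the same JSON-RPC configuration issued via scripts/rpc.py.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_get_subsystems      # should match the JSON dumped above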
00:17:47.272 [2024-10-01 13:51:57.252424] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74707 ] 00:17:47.272 [2024-10-01 13:51:57.396093] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:47.272 [2024-10-01 13:51:57.396223] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:47.272 [2024-10-01 13:51:57.396239] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:47.272 [2024-10-01 13:51:57.396262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:47.272 [2024-10-01 13:51:57.396279] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:47.272 [2024-10-01 13:51:57.396853] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:47.272 [2024-10-01 13:51:57.396961] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e4b750 0 00:17:47.272 [2024-10-01 13:51:57.403964] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:47.272 [2024-10-01 13:51:57.404008] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:47.272 [2024-10-01 13:51:57.404024] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:47.272 [2024-10-01 13:51:57.404033] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:47.272 [2024-10-01 13:51:57.404104] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.272 [2024-10-01 13:51:57.404121] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.272 [2024-10-01 13:51:57.404132] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4b750) 00:17:47.272 [2024-10-01 13:51:57.404169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:47.272 [2024-10-01 13:51:57.404231] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 0, qid 0 00:17:47.272 [2024-10-01 13:51:57.411953] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.272 [2024-10-01 13:51:57.411992] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.272 [2024-10-01 13:51:57.412005] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.272 [2024-10-01 13:51:57.412012] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4b750 00:17:47.272 [2024-10-01 13:51:57.412031] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:47.272 [2024-10-01 13:51:57.412042] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:47.272 [2024-10-01 13:51:57.412050] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:47.272 [2024-10-01 13:51:57.412076] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.272 [2024-10-01 13:51:57.412084] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.272 
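The debug trace that follows comes from the spdk_nvme_identify invocation shown just above, pointed at the discovery subsystem; -L all enables the per-module debug logging that produces the nvme_tcp/nvme_ctrlr lines below. The same run can be repeated by hand with the identical transport ID string:

# Sketch: rerun the discovery-controller identify outside identify.sh.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all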
[2024-10-01 13:51:57.412088] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4b750) 00:17:47.273 [2024-10-01 13:51:57.412103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.273 [2024-10-01 13:51:57.412139] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 0, qid 0 00:17:47.273 [2024-10-01 13:51:57.412211] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.273 [2024-10-01 13:51:57.412222] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.273 [2024-10-01 13:51:57.412227] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.412231] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4b750 00:17:47.273 [2024-10-01 13:51:57.412239] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:47.273 [2024-10-01 13:51:57.412248] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:47.273 [2024-10-01 13:51:57.412258] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.412264] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.412268] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4b750) 00:17:47.273 [2024-10-01 13:51:57.412277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.273 [2024-10-01 13:51:57.412303] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 0, qid 0 00:17:47.273 [2024-10-01 13:51:57.412350] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.273 [2024-10-01 13:51:57.412358] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.273 [2024-10-01 13:51:57.412363] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.412368] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4b750 00:17:47.273 [2024-10-01 13:51:57.412375] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:47.273 [2024-10-01 13:51:57.412386] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:47.273 [2024-10-01 13:51:57.412395] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.412401] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.412405] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4b750) 00:17:47.273 [2024-10-01 13:51:57.412414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.273 [2024-10-01 13:51:57.412436] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 0, qid 0 00:17:47.273 [2024-10-01 13:51:57.412483] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.273 [2024-10-01 13:51:57.412491] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.273 [2024-10-01 13:51:57.412496] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.412500] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4b750 00:17:47.273 [2024-10-01 13:51:57.412507] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:47.273 [2024-10-01 13:51:57.412519] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.412525] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.412530] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4b750) 00:17:47.273 [2024-10-01 13:51:57.412538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.273 [2024-10-01 13:51:57.412560] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 0, qid 0 00:17:47.273 [2024-10-01 13:51:57.412609] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.273 [2024-10-01 13:51:57.412618] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.273 [2024-10-01 13:51:57.412622] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.412627] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4b750 00:17:47.273 [2024-10-01 13:51:57.412633] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:47.273 [2024-10-01 13:51:57.412639] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:47.273 [2024-10-01 13:51:57.412648] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:47.273 [2024-10-01 13:51:57.412755] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:47.273 [2024-10-01 13:51:57.412762] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:47.273 [2024-10-01 13:51:57.412773] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.412778] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.412782] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4b750) 00:17:47.273 [2024-10-01 13:51:57.412791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.273 [2024-10-01 13:51:57.412814] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 0, qid 0 00:17:47.273 [2024-10-01 13:51:57.412871] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.273 [2024-10-01 13:51:57.412880] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.273 [2024-10-01 13:51:57.412884] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.273 
[2024-10-01 13:51:57.412889] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4b750 00:17:47.273 [2024-10-01 13:51:57.412895] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:47.273 [2024-10-01 13:51:57.412907] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.412933] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.412939] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4b750) 00:17:47.273 [2024-10-01 13:51:57.412948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.273 [2024-10-01 13:51:57.412976] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 0, qid 0 00:17:47.273 [2024-10-01 13:51:57.413026] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.273 [2024-10-01 13:51:57.413038] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.273 [2024-10-01 13:51:57.413042] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.413047] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4b750 00:17:47.273 [2024-10-01 13:51:57.413052] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:47.273 [2024-10-01 13:51:57.413059] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:47.273 [2024-10-01 13:51:57.413068] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:47.273 [2024-10-01 13:51:57.413089] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:47.273 [2024-10-01 13:51:57.413103] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.413110] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4b750) 00:17:47.273 [2024-10-01 13:51:57.413119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.273 [2024-10-01 13:51:57.413146] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 0, qid 0 00:17:47.273 [2024-10-01 13:51:57.413241] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.273 [2024-10-01 13:51:57.413249] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.273 [2024-10-01 13:51:57.413254] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.413259] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e4b750): datao=0, datal=4096, cccid=0 00:17:47.273 [2024-10-01 13:51:57.413264] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eaf840) on tqpair(0x1e4b750): expected_datao=0, payload_size=4096 00:17:47.273 [2024-10-01 13:51:57.413270] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.273 
[2024-10-01 13:51:57.413280] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.413285] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.413296] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.273 [2024-10-01 13:51:57.413303] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.273 [2024-10-01 13:51:57.413308] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.413313] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4b750 00:17:47.273 [2024-10-01 13:51:57.413324] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:47.273 [2024-10-01 13:51:57.413330] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:47.273 [2024-10-01 13:51:57.413335] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:47.273 [2024-10-01 13:51:57.413341] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:47.273 [2024-10-01 13:51:57.413347] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:47.273 [2024-10-01 13:51:57.413352] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:47.273 [2024-10-01 13:51:57.413363] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:47.273 [2024-10-01 13:51:57.413379] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.413385] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.413389] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4b750) 00:17:47.273 [2024-10-01 13:51:57.413399] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:47.273 [2024-10-01 13:51:57.413424] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 0, qid 0 00:17:47.273 [2024-10-01 13:51:57.413484] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.273 [2024-10-01 13:51:57.413493] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.273 [2024-10-01 13:51:57.413497] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.273 [2024-10-01 13:51:57.413502] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4b750 00:17:47.274 [2024-10-01 13:51:57.413512] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.413517] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.413521] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4b750) 00:17:47.274 [2024-10-01 13:51:57.413529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.274 [2024-10-01 13:51:57.413537] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.413542] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.413546] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e4b750) 00:17:47.274 [2024-10-01 13:51:57.413553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.274 [2024-10-01 13:51:57.413561] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.413566] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.413570] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e4b750) 00:17:47.274 [2024-10-01 13:51:57.413577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.274 [2024-10-01 13:51:57.413584] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.413589] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.413593] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e4b750) 00:17:47.274 [2024-10-01 13:51:57.413600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.274 [2024-10-01 13:51:57.413606] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:47.274 [2024-10-01 13:51:57.413623] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:47.274 [2024-10-01 13:51:57.413633] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.413638] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e4b750) 00:17:47.274 [2024-10-01 13:51:57.413646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.274 [2024-10-01 13:51:57.413672] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 0, qid 0 00:17:47.274 [2024-10-01 13:51:57.413685] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf9c0, cid 1, qid 0 00:17:47.274 [2024-10-01 13:51:57.413695] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eafb40, cid 2, qid 0 00:17:47.274 [2024-10-01 13:51:57.413705] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eafcc0, cid 3, qid 0 00:17:47.274 [2024-10-01 13:51:57.413714] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eafe40, cid 4, qid 0 00:17:47.274 [2024-10-01 13:51:57.413785] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.274 [2024-10-01 13:51:57.413801] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.274 [2024-10-01 13:51:57.413810] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.413819] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eafe40) on tqpair=0x1e4b750 00:17:47.274 [2024-10-01 13:51:57.413831] 
nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:47.274 [2024-10-01 13:51:57.413843] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:47.274 [2024-10-01 13:51:57.413869] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.413881] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e4b750) 00:17:47.274 [2024-10-01 13:51:57.413897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.274 [2024-10-01 13:51:57.413965] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eafe40, cid 4, qid 0 00:17:47.274 [2024-10-01 13:51:57.414029] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.274 [2024-10-01 13:51:57.414046] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.274 [2024-10-01 13:51:57.414056] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414064] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e4b750): datao=0, datal=4096, cccid=4 00:17:47.274 [2024-10-01 13:51:57.414074] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eafe40) on tqpair(0x1e4b750): expected_datao=0, payload_size=4096 00:17:47.274 [2024-10-01 13:51:57.414084] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414102] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414112] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414133] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.274 [2024-10-01 13:51:57.414146] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.274 [2024-10-01 13:51:57.414156] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414166] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eafe40) on tqpair=0x1e4b750 00:17:47.274 [2024-10-01 13:51:57.414196] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:47.274 [2024-10-01 13:51:57.414256] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414269] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e4b750) 00:17:47.274 [2024-10-01 13:51:57.414284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.274 [2024-10-01 13:51:57.414299] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414308] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414316] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e4b750) 00:17:47.274 [2024-10-01 13:51:57.414329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.274 [2024-10-01 13:51:57.414379] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1eafe40, cid 4, qid 0 00:17:47.274 [2024-10-01 13:51:57.414396] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaffc0, cid 5, qid 0 00:17:47.274 [2024-10-01 13:51:57.414488] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.274 [2024-10-01 13:51:57.414506] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.274 [2024-10-01 13:51:57.414514] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414523] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e4b750): datao=0, datal=1024, cccid=4 00:17:47.274 [2024-10-01 13:51:57.414547] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eafe40) on tqpair(0x1e4b750): expected_datao=0, payload_size=1024 00:17:47.274 [2024-10-01 13:51:57.414560] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414575] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414585] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414597] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.274 [2024-10-01 13:51:57.414608] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.274 [2024-10-01 13:51:57.414617] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414626] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaffc0) on tqpair=0x1e4b750 00:17:47.274 [2024-10-01 13:51:57.414666] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.274 [2024-10-01 13:51:57.414689] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.274 [2024-10-01 13:51:57.414698] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414706] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eafe40) on tqpair=0x1e4b750 00:17:47.274 [2024-10-01 13:51:57.414731] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414743] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e4b750) 00:17:47.274 [2024-10-01 13:51:57.414760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.274 [2024-10-01 13:51:57.414808] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eafe40, cid 4, qid 0 00:17:47.274 [2024-10-01 13:51:57.414879] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.274 [2024-10-01 13:51:57.414897] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.274 [2024-10-01 13:51:57.414907] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414942] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e4b750): datao=0, datal=3072, cccid=4 00:17:47.274 [2024-10-01 13:51:57.414955] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eafe40) on tqpair(0x1e4b750): expected_datao=0, payload_size=3072 00:17:47.274 [2024-10-01 13:51:57.414965] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414979] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.414989] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.415008] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.274 [2024-10-01 13:51:57.415022] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.274 [2024-10-01 13:51:57.415032] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.415040] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eafe40) on tqpair=0x1e4b750 00:17:47.274 [2024-10-01 13:51:57.415062] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.415074] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e4b750) 00:17:47.274 [2024-10-01 13:51:57.415091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.274 [2024-10-01 13:51:57.415143] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eafe40, cid 4, qid 0 00:17:47.274 [2024-10-01 13:51:57.415210] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.274 [2024-10-01 13:51:57.415227] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.274 [2024-10-01 13:51:57.415237] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.415245] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e4b750): datao=0, datal=8, cccid=4 00:17:47.274 [2024-10-01 13:51:57.415254] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eafe40) on tqpair(0x1e4b750): expected_datao=0, payload_size=8 00:17:47.274 [2024-10-01 13:51:57.415264] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.415277] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.415287] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.274 [2024-10-01 13:51:57.415323] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.274 [2024-10-01 13:51:57.415339] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.275 [2024-10-01 13:51:57.415348] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.275 [2024-10-01 13:51:57.415358] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eafe40) on tqpair=0x1e4b750 00:17:47.275 ===================================================== 00:17:47.275 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:47.275 ===================================================== 00:17:47.275 Controller Capabilities/Features 00:17:47.275 ================================ 00:17:47.275 Vendor ID: 0000 00:17:47.275 Subsystem Vendor ID: 0000 00:17:47.275 Serial Number: .................... 00:17:47.275 Model Number: ........................................ 
00:17:47.275 Firmware Version: 25.01 00:17:47.275 Recommended Arb Burst: 0 00:17:47.275 IEEE OUI Identifier: 00 00 00 00:17:47.275 Multi-path I/O 00:17:47.275 May have multiple subsystem ports: No 00:17:47.275 May have multiple controllers: No 00:17:47.275 Associated with SR-IOV VF: No 00:17:47.275 Max Data Transfer Size: 131072 00:17:47.275 Max Number of Namespaces: 0 00:17:47.275 Max Number of I/O Queues: 1024 00:17:47.275 NVMe Specification Version (VS): 1.3 00:17:47.275 NVMe Specification Version (Identify): 1.3 00:17:47.275 Maximum Queue Entries: 128 00:17:47.275 Contiguous Queues Required: Yes 00:17:47.275 Arbitration Mechanisms Supported 00:17:47.275 Weighted Round Robin: Not Supported 00:17:47.275 Vendor Specific: Not Supported 00:17:47.275 Reset Timeout: 15000 ms 00:17:47.275 Doorbell Stride: 4 bytes 00:17:47.275 NVM Subsystem Reset: Not Supported 00:17:47.275 Command Sets Supported 00:17:47.275 NVM Command Set: Supported 00:17:47.275 Boot Partition: Not Supported 00:17:47.275 Memory Page Size Minimum: 4096 bytes 00:17:47.275 Memory Page Size Maximum: 4096 bytes 00:17:47.275 Persistent Memory Region: Not Supported 00:17:47.275 Optional Asynchronous Events Supported 00:17:47.275 Namespace Attribute Notices: Not Supported 00:17:47.275 Firmware Activation Notices: Not Supported 00:17:47.275 ANA Change Notices: Not Supported 00:17:47.275 PLE Aggregate Log Change Notices: Not Supported 00:17:47.275 LBA Status Info Alert Notices: Not Supported 00:17:47.275 EGE Aggregate Log Change Notices: Not Supported 00:17:47.275 Normal NVM Subsystem Shutdown event: Not Supported 00:17:47.275 Zone Descriptor Change Notices: Not Supported 00:17:47.275 Discovery Log Change Notices: Supported 00:17:47.275 Controller Attributes 00:17:47.275 128-bit Host Identifier: Not Supported 00:17:47.275 Non-Operational Permissive Mode: Not Supported 00:17:47.275 NVM Sets: Not Supported 00:17:47.275 Read Recovery Levels: Not Supported 00:17:47.275 Endurance Groups: Not Supported 00:17:47.275 Predictable Latency Mode: Not Supported 00:17:47.275 Traffic Based Keep ALive: Not Supported 00:17:47.275 Namespace Granularity: Not Supported 00:17:47.275 SQ Associations: Not Supported 00:17:47.275 UUID List: Not Supported 00:17:47.275 Multi-Domain Subsystem: Not Supported 00:17:47.275 Fixed Capacity Management: Not Supported 00:17:47.275 Variable Capacity Management: Not Supported 00:17:47.275 Delete Endurance Group: Not Supported 00:17:47.275 Delete NVM Set: Not Supported 00:17:47.275 Extended LBA Formats Supported: Not Supported 00:17:47.275 Flexible Data Placement Supported: Not Supported 00:17:47.275 00:17:47.275 Controller Memory Buffer Support 00:17:47.275 ================================ 00:17:47.275 Supported: No 00:17:47.275 00:17:47.275 Persistent Memory Region Support 00:17:47.275 ================================ 00:17:47.275 Supported: No 00:17:47.275 00:17:47.275 Admin Command Set Attributes 00:17:47.275 ============================ 00:17:47.275 Security Send/Receive: Not Supported 00:17:47.275 Format NVM: Not Supported 00:17:47.275 Firmware Activate/Download: Not Supported 00:17:47.275 Namespace Management: Not Supported 00:17:47.275 Device Self-Test: Not Supported 00:17:47.275 Directives: Not Supported 00:17:47.275 NVMe-MI: Not Supported 00:17:47.275 Virtualization Management: Not Supported 00:17:47.275 Doorbell Buffer Config: Not Supported 00:17:47.275 Get LBA Status Capability: Not Supported 00:17:47.275 Command & Feature Lockdown Capability: Not Supported 00:17:47.275 Abort Command Limit: 1 00:17:47.275 Async 
Event Request Limit: 4 00:17:47.275 Number of Firmware Slots: N/A 00:17:47.275 Firmware Slot 1 Read-Only: N/A 00:17:47.275 Firmware Activation Without Reset: N/A 00:17:47.275 Multiple Update Detection Support: N/A 00:17:47.275 Firmware Update Granularity: No Information Provided 00:17:47.275 Per-Namespace SMART Log: No 00:17:47.275 Asymmetric Namespace Access Log Page: Not Supported 00:17:47.275 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:47.275 Command Effects Log Page: Not Supported 00:17:47.275 Get Log Page Extended Data: Supported 00:17:47.275 Telemetry Log Pages: Not Supported 00:17:47.275 Persistent Event Log Pages: Not Supported 00:17:47.275 Supported Log Pages Log Page: May Support 00:17:47.275 Commands Supported & Effects Log Page: Not Supported 00:17:47.275 Feature Identifiers & Effects Log Page:May Support 00:17:47.275 NVMe-MI Commands & Effects Log Page: May Support 00:17:47.275 Data Area 4 for Telemetry Log: Not Supported 00:17:47.275 Error Log Page Entries Supported: 128 00:17:47.275 Keep Alive: Not Supported 00:17:47.275 00:17:47.275 NVM Command Set Attributes 00:17:47.275 ========================== 00:17:47.275 Submission Queue Entry Size 00:17:47.275 Max: 1 00:17:47.275 Min: 1 00:17:47.275 Completion Queue Entry Size 00:17:47.275 Max: 1 00:17:47.275 Min: 1 00:17:47.275 Number of Namespaces: 0 00:17:47.275 Compare Command: Not Supported 00:17:47.275 Write Uncorrectable Command: Not Supported 00:17:47.275 Dataset Management Command: Not Supported 00:17:47.275 Write Zeroes Command: Not Supported 00:17:47.275 Set Features Save Field: Not Supported 00:17:47.275 Reservations: Not Supported 00:17:47.275 Timestamp: Not Supported 00:17:47.275 Copy: Not Supported 00:17:47.275 Volatile Write Cache: Not Present 00:17:47.275 Atomic Write Unit (Normal): 1 00:17:47.275 Atomic Write Unit (PFail): 1 00:17:47.275 Atomic Compare & Write Unit: 1 00:17:47.275 Fused Compare & Write: Supported 00:17:47.275 Scatter-Gather List 00:17:47.275 SGL Command Set: Supported 00:17:47.275 SGL Keyed: Supported 00:17:47.275 SGL Bit Bucket Descriptor: Not Supported 00:17:47.275 SGL Metadata Pointer: Not Supported 00:17:47.275 Oversized SGL: Not Supported 00:17:47.275 SGL Metadata Address: Not Supported 00:17:47.275 SGL Offset: Supported 00:17:47.275 Transport SGL Data Block: Not Supported 00:17:47.275 Replay Protected Memory Block: Not Supported 00:17:47.275 00:17:47.275 Firmware Slot Information 00:17:47.275 ========================= 00:17:47.275 Active slot: 0 00:17:47.275 00:17:47.275 00:17:47.275 Error Log 00:17:47.275 ========= 00:17:47.275 00:17:47.275 Active Namespaces 00:17:47.275 ================= 00:17:47.275 Discovery Log Page 00:17:47.275 ================== 00:17:47.275 Generation Counter: 2 00:17:47.275 Number of Records: 2 00:17:47.275 Record Format: 0 00:17:47.275 00:17:47.275 Discovery Log Entry 0 00:17:47.275 ---------------------- 00:17:47.275 Transport Type: 3 (TCP) 00:17:47.275 Address Family: 1 (IPv4) 00:17:47.275 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:47.275 Entry Flags: 00:17:47.275 Duplicate Returned Information: 1 00:17:47.275 Explicit Persistent Connection Support for Discovery: 1 00:17:47.275 Transport Requirements: 00:17:47.275 Secure Channel: Not Required 00:17:47.275 Port ID: 0 (0x0000) 00:17:47.275 Controller ID: 65535 (0xffff) 00:17:47.275 Admin Max SQ Size: 128 00:17:47.275 Transport Service Identifier: 4420 00:17:47.275 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:47.275 Transport Address: 10.0.0.3 00:17:47.275 
Discovery Log Entry 1 00:17:47.275 ---------------------- 00:17:47.275 Transport Type: 3 (TCP) 00:17:47.275 Address Family: 1 (IPv4) 00:17:47.275 Subsystem Type: 2 (NVM Subsystem) 00:17:47.275 Entry Flags: 00:17:47.275 Duplicate Returned Information: 0 00:17:47.275 Explicit Persistent Connection Support for Discovery: 0 00:17:47.275 Transport Requirements: 00:17:47.275 Secure Channel: Not Required 00:17:47.275 Port ID: 0 (0x0000) 00:17:47.275 Controller ID: 65535 (0xffff) 00:17:47.275 Admin Max SQ Size: 128 00:17:47.275 Transport Service Identifier: 4420 00:17:47.275 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:47.275 Transport Address: 10.0.0.3 [2024-10-01 13:51:57.415530] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:47.275 [2024-10-01 13:51:57.415557] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4b750 00:17:47.276 [2024-10-01 13:51:57.415572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.276 [2024-10-01 13:51:57.415584] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf9c0) on tqpair=0x1e4b750 00:17:47.276 [2024-10-01 13:51:57.415594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.276 [2024-10-01 13:51:57.415605] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eafb40) on tqpair=0x1e4b750 00:17:47.276 [2024-10-01 13:51:57.415616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.276 [2024-10-01 13:51:57.415625] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eafcc0) on tqpair=0x1e4b750 00:17:47.276 [2024-10-01 13:51:57.415635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.276 [2024-10-01 13:51:57.415652] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.276 [2024-10-01 13:51:57.415663] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.276 [2024-10-01 13:51:57.415672] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e4b750) 00:17:47.276 [2024-10-01 13:51:57.415688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.276 [2024-10-01 13:51:57.415732] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eafcc0, cid 3, qid 0 00:17:47.276 [2024-10-01 13:51:57.415790] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.276 [2024-10-01 13:51:57.415808] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.276 [2024-10-01 13:51:57.415818] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.276 [2024-10-01 13:51:57.415827] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eafcc0) on tqpair=0x1e4b750 00:17:47.276 [2024-10-01 13:51:57.415843] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.276 [2024-10-01 13:51:57.415855] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.276 [2024-10-01 13:51:57.415864] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e4b750) 00:17:47.276 [2024-10-01 
13:51:57.415879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.276 [2024-10-01 13:51:57.419934] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eafcc0, cid 3, qid 0 00:17:47.276 [2024-10-01 13:51:57.419976] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.276 [2024-10-01 13:51:57.419994] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.276 [2024-10-01 13:51:57.420004] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.276 [2024-10-01 13:51:57.420010] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eafcc0) on tqpair=0x1e4b750 00:17:47.276 [2024-10-01 13:51:57.420018] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:47.276 [2024-10-01 13:51:57.420023] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:47.276 [2024-10-01 13:51:57.420041] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.276 [2024-10-01 13:51:57.420048] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.276 [2024-10-01 13:51:57.420052] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e4b750) 00:17:47.276 [2024-10-01 13:51:57.420063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.276 [2024-10-01 13:51:57.420097] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eafcc0, cid 3, qid 0 00:17:47.276 [2024-10-01 13:51:57.420160] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.276 [2024-10-01 13:51:57.420169] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.276 [2024-10-01 13:51:57.420173] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.276 [2024-10-01 13:51:57.420178] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eafcc0) on tqpair=0x1e4b750 00:17:47.276 [2024-10-01 13:51:57.420189] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 0 milliseconds 00:17:47.276 00:17:47.565 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:47.565 [2024-10-01 13:51:57.470017] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:17:47.565 [2024-10-01 13:51:57.470069] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74709 ] 00:17:47.565 [2024-10-01 13:51:57.610358] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:47.565 [2024-10-01 13:51:57.610444] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:47.565 [2024-10-01 13:51:57.610454] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:47.565 [2024-10-01 13:51:57.610470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:47.565 [2024-10-01 13:51:57.610484] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:47.565 [2024-10-01 13:51:57.610903] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:47.565 [2024-10-01 13:51:57.611003] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2376750 0 00:17:47.565 [2024-10-01 13:51:57.617945] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:47.565 [2024-10-01 13:51:57.617975] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:47.565 [2024-10-01 13:51:57.617983] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:47.565 [2024-10-01 13:51:57.617987] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:47.565 [2024-10-01 13:51:57.618036] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.618046] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.618051] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2376750) 00:17:47.565 [2024-10-01 13:51:57.618067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:47.565 [2024-10-01 13:51:57.618103] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23da840, cid 0, qid 0 00:17:47.565 [2024-10-01 13:51:57.625939] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.565 [2024-10-01 13:51:57.625964] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.565 [2024-10-01 13:51:57.625971] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.625977] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23da840) on tqpair=0x2376750 00:17:47.565 [2024-10-01 13:51:57.625990] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:47.565 [2024-10-01 13:51:57.626001] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:47.565 [2024-10-01 13:51:57.626008] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:47.565 [2024-10-01 13:51:57.626028] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626036] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626040] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2376750) 00:17:47.565 [2024-10-01 13:51:57.626051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.565 [2024-10-01 13:51:57.626083] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23da840, cid 0, qid 0 00:17:47.565 [2024-10-01 13:51:57.626142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.565 [2024-10-01 13:51:57.626151] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.565 [2024-10-01 13:51:57.626155] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626160] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23da840) on tqpair=0x2376750 00:17:47.565 [2024-10-01 13:51:57.626167] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:47.565 [2024-10-01 13:51:57.626176] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:47.565 [2024-10-01 13:51:57.626187] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626192] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626197] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2376750) 00:17:47.565 [2024-10-01 13:51:57.626206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.565 [2024-10-01 13:51:57.626231] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23da840, cid 0, qid 0 00:17:47.565 [2024-10-01 13:51:57.626278] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.565 [2024-10-01 13:51:57.626286] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.565 [2024-10-01 13:51:57.626291] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626295] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23da840) on tqpair=0x2376750 00:17:47.565 [2024-10-01 13:51:57.626303] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:47.565 [2024-10-01 13:51:57.626313] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:47.565 [2024-10-01 13:51:57.626323] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626328] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626333] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2376750) 00:17:47.565 [2024-10-01 13:51:57.626341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.565 [2024-10-01 13:51:57.626372] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23da840, cid 0, qid 0 00:17:47.565 [2024-10-01 13:51:57.626419] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.565 [2024-10-01 13:51:57.626428] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.565 [2024-10-01 13:51:57.626432] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626437] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23da840) on tqpair=0x2376750 00:17:47.565 [2024-10-01 13:51:57.626444] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:47.565 [2024-10-01 13:51:57.626457] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626463] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626467] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2376750) 00:17:47.565 [2024-10-01 13:51:57.626476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.565 [2024-10-01 13:51:57.626501] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23da840, cid 0, qid 0 00:17:47.565 [2024-10-01 13:51:57.626561] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.565 [2024-10-01 13:51:57.626571] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.565 [2024-10-01 13:51:57.626576] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626581] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23da840) on tqpair=0x2376750 00:17:47.565 [2024-10-01 13:51:57.626587] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:47.565 [2024-10-01 13:51:57.626593] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:47.565 [2024-10-01 13:51:57.626602] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:47.565 [2024-10-01 13:51:57.626709] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:47.565 [2024-10-01 13:51:57.626715] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:47.565 [2024-10-01 13:51:57.626726] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626732] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626736] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2376750) 00:17:47.565 [2024-10-01 13:51:57.626745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.565 [2024-10-01 13:51:57.626770] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23da840, cid 0, qid 0 00:17:47.565 [2024-10-01 13:51:57.626823] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.565 [2024-10-01 13:51:57.626832] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.565 [2024-10-01 13:51:57.626836] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626841] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23da840) on tqpair=0x2376750 00:17:47.565 [2024-10-01 13:51:57.626848] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:47.565 [2024-10-01 13:51:57.626860] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626866] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626871] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2376750) 00:17:47.565 [2024-10-01 13:51:57.626880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.565 [2024-10-01 13:51:57.626905] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23da840, cid 0, qid 0 00:17:47.565 [2024-10-01 13:51:57.626970] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.565 [2024-10-01 13:51:57.626981] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.565 [2024-10-01 13:51:57.626986] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.565 [2024-10-01 13:51:57.626990] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23da840) on tqpair=0x2376750 00:17:47.565 [2024-10-01 13:51:57.626997] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:47.565 [2024-10-01 13:51:57.627003] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:47.566 [2024-10-01 13:51:57.627013] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:47.566 [2024-10-01 13:51:57.627032] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:47.566 [2024-10-01 13:51:57.627046] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627052] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2376750) 00:17:47.566 [2024-10-01 13:51:57.627061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.566 [2024-10-01 13:51:57.627088] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23da840, cid 0, qid 0 00:17:47.566 [2024-10-01 13:51:57.627188] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.566 [2024-10-01 13:51:57.627197] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.566 [2024-10-01 13:51:57.627202] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627206] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2376750): datao=0, datal=4096, cccid=0 00:17:47.566 [2024-10-01 13:51:57.627212] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23da840) on tqpair(0x2376750): expected_datao=0, payload_size=4096 00:17:47.566 [2024-10-01 13:51:57.627217] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627227] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627232] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.566 [2024-10-01 
13:51:57.627243] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.566 [2024-10-01 13:51:57.627250] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.566 [2024-10-01 13:51:57.627255] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627260] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23da840) on tqpair=0x2376750 00:17:47.566 [2024-10-01 13:51:57.627271] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:47.566 [2024-10-01 13:51:57.627277] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:47.566 [2024-10-01 13:51:57.627282] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:47.566 [2024-10-01 13:51:57.627287] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:47.566 [2024-10-01 13:51:57.627293] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:47.566 [2024-10-01 13:51:57.627299] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:47.566 [2024-10-01 13:51:57.627310] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:47.566 [2024-10-01 13:51:57.627327] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627333] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627338] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2376750) 00:17:47.566 [2024-10-01 13:51:57.627347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:47.566 [2024-10-01 13:51:57.627373] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23da840, cid 0, qid 0 00:17:47.566 [2024-10-01 13:51:57.627425] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.566 [2024-10-01 13:51:57.627433] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.566 [2024-10-01 13:51:57.627438] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627443] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23da840) on tqpair=0x2376750 00:17:47.566 [2024-10-01 13:51:57.627453] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627458] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627463] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2376750) 00:17:47.566 [2024-10-01 13:51:57.627471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.566 [2024-10-01 13:51:57.627479] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627483] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627488] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2376750) 00:17:47.566 
[2024-10-01 13:51:57.627495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.566 [2024-10-01 13:51:57.627502] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627507] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627511] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2376750) 00:17:47.566 [2024-10-01 13:51:57.627518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.566 [2024-10-01 13:51:57.627525] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627530] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627534] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.566 [2024-10-01 13:51:57.627541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.566 [2024-10-01 13:51:57.627547] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:47.566 [2024-10-01 13:51:57.627564] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:47.566 [2024-10-01 13:51:57.627576] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627581] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2376750) 00:17:47.566 [2024-10-01 13:51:57.627590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.566 [2024-10-01 13:51:57.627616] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23da840, cid 0, qid 0 00:17:47.566 [2024-10-01 13:51:57.627626] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23da9c0, cid 1, qid 0 00:17:47.566 [2024-10-01 13:51:57.627632] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dab40, cid 2, qid 0 00:17:47.566 [2024-10-01 13:51:57.627637] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.566 [2024-10-01 13:51:57.627642] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dae40, cid 4, qid 0 00:17:47.566 [2024-10-01 13:51:57.627729] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.566 [2024-10-01 13:51:57.627737] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.566 [2024-10-01 13:51:57.627742] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627747] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dae40) on tqpair=0x2376750 00:17:47.566 [2024-10-01 13:51:57.627753] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:47.566 [2024-10-01 13:51:57.627760] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:47.566 [2024-10-01 13:51:57.627776] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:47.566 [2024-10-01 13:51:57.627785] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:47.566 [2024-10-01 13:51:57.627794] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627800] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627804] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2376750) 00:17:47.566 [2024-10-01 13:51:57.627813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:47.566 [2024-10-01 13:51:57.627837] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dae40, cid 4, qid 0 00:17:47.566 [2024-10-01 13:51:57.627887] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.566 [2024-10-01 13:51:57.627895] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.566 [2024-10-01 13:51:57.627900] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.627904] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dae40) on tqpair=0x2376750 00:17:47.566 [2024-10-01 13:51:57.627990] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:47.566 [2024-10-01 13:51:57.628010] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:47.566 [2024-10-01 13:51:57.628021] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.628026] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2376750) 00:17:47.566 [2024-10-01 13:51:57.628036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.566 [2024-10-01 13:51:57.628063] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dae40, cid 4, qid 0 00:17:47.566 [2024-10-01 13:51:57.628131] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.566 [2024-10-01 13:51:57.628139] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.566 [2024-10-01 13:51:57.628144] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.628148] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2376750): datao=0, datal=4096, cccid=4 00:17:47.566 [2024-10-01 13:51:57.628154] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23dae40) on tqpair(0x2376750): expected_datao=0, payload_size=4096 00:17:47.566 [2024-10-01 13:51:57.628159] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.628168] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.628173] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.628183] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.566 [2024-10-01 13:51:57.628191] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:17:47.566 [2024-10-01 13:51:57.628195] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.566 [2024-10-01 13:51:57.628200] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dae40) on tqpair=0x2376750 00:17:47.566 [2024-10-01 13:51:57.628222] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:47.566 [2024-10-01 13:51:57.628236] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:47.566 [2024-10-01 13:51:57.628249] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:47.566 [2024-10-01 13:51:57.628260] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628266] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2376750) 00:17:47.567 [2024-10-01 13:51:57.628274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.567 [2024-10-01 13:51:57.628301] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dae40, cid 4, qid 0 00:17:47.567 [2024-10-01 13:51:57.628368] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.567 [2024-10-01 13:51:57.628377] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.567 [2024-10-01 13:51:57.628381] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628386] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2376750): datao=0, datal=4096, cccid=4 00:17:47.567 [2024-10-01 13:51:57.628391] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23dae40) on tqpair(0x2376750): expected_datao=0, payload_size=4096 00:17:47.567 [2024-10-01 13:51:57.628396] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628404] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628410] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628420] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.567 [2024-10-01 13:51:57.628427] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.567 [2024-10-01 13:51:57.628432] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628436] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dae40) on tqpair=0x2376750 00:17:47.567 [2024-10-01 13:51:57.628450] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:47.567 [2024-10-01 13:51:57.628463] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:47.567 [2024-10-01 13:51:57.628475] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628480] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2376750) 00:17:47.567 [2024-10-01 13:51:57.628489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.567 [2024-10-01 13:51:57.628514] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dae40, cid 4, qid 0 00:17:47.567 [2024-10-01 13:51:57.628573] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.567 [2024-10-01 13:51:57.628582] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.567 [2024-10-01 13:51:57.628586] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628591] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2376750): datao=0, datal=4096, cccid=4 00:17:47.567 [2024-10-01 13:51:57.628596] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23dae40) on tqpair(0x2376750): expected_datao=0, payload_size=4096 00:17:47.567 [2024-10-01 13:51:57.628601] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628609] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628614] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628624] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.567 [2024-10-01 13:51:57.628632] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.567 [2024-10-01 13:51:57.628636] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628641] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dae40) on tqpair=0x2376750 00:17:47.567 [2024-10-01 13:51:57.628658] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:47.567 [2024-10-01 13:51:57.628670] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:47.567 [2024-10-01 13:51:57.628682] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:47.567 [2024-10-01 13:51:57.628690] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:47.567 [2024-10-01 13:51:57.628696] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:47.567 [2024-10-01 13:51:57.628703] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:47.567 [2024-10-01 13:51:57.628709] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:47.567 [2024-10-01 13:51:57.628715] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:47.567 [2024-10-01 13:51:57.628721] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:47.567 [2024-10-01 13:51:57.628741] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628747] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2376750) 00:17:47.567 [2024-10-01 13:51:57.628756] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.567 [2024-10-01 13:51:57.628765] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628770] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628774] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2376750) 00:17:47.567 [2024-10-01 13:51:57.628782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.567 [2024-10-01 13:51:57.628817] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dae40, cid 4, qid 0 00:17:47.567 [2024-10-01 13:51:57.628827] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dafc0, cid 5, qid 0 00:17:47.567 [2024-10-01 13:51:57.628887] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.567 [2024-10-01 13:51:57.628895] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.567 [2024-10-01 13:51:57.628900] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628904] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dae40) on tqpair=0x2376750 00:17:47.567 [2024-10-01 13:51:57.628928] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.567 [2024-10-01 13:51:57.628938] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.567 [2024-10-01 13:51:57.628943] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628948] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dafc0) on tqpair=0x2376750 00:17:47.567 [2024-10-01 13:51:57.628962] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.628968] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2376750) 00:17:47.567 [2024-10-01 13:51:57.628977] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.567 [2024-10-01 13:51:57.629003] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dafc0, cid 5, qid 0 00:17:47.567 [2024-10-01 13:51:57.629051] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.567 [2024-10-01 13:51:57.629060] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.567 [2024-10-01 13:51:57.629064] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.629069] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dafc0) on tqpair=0x2376750 00:17:47.567 [2024-10-01 13:51:57.629082] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.629088] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2376750) 00:17:47.567 [2024-10-01 13:51:57.629096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.567 [2024-10-01 13:51:57.629119] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dafc0, cid 5, qid 0 00:17:47.567 [2024-10-01 13:51:57.629170] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.567 
[2024-10-01 13:51:57.629178] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.567 [2024-10-01 13:51:57.629183] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.629188] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dafc0) on tqpair=0x2376750 00:17:47.567 [2024-10-01 13:51:57.629200] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.629206] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2376750) 00:17:47.567 [2024-10-01 13:51:57.629214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.567 [2024-10-01 13:51:57.629236] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dafc0, cid 5, qid 0 00:17:47.567 [2024-10-01 13:51:57.629282] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.567 [2024-10-01 13:51:57.629303] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.567 [2024-10-01 13:51:57.629307] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.629312] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dafc0) on tqpair=0x2376750 00:17:47.567 [2024-10-01 13:51:57.629336] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.629344] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2376750) 00:17:47.567 [2024-10-01 13:51:57.629353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.567 [2024-10-01 13:51:57.629362] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.629368] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2376750) 00:17:47.567 [2024-10-01 13:51:57.629376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.567 [2024-10-01 13:51:57.629385] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.629390] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2376750) 00:17:47.567 [2024-10-01 13:51:57.629398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.567 [2024-10-01 13:51:57.629407] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.567 [2024-10-01 13:51:57.629413] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2376750) 00:17:47.567 [2024-10-01 13:51:57.629420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.567 [2024-10-01 13:51:57.629446] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dafc0, cid 5, qid 0 00:17:47.567 [2024-10-01 13:51:57.629455] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dae40, cid 4, qid 0 00:17:47.567 [2024-10-01 13:51:57.629461] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23db140, cid 6, qid 0 00:17:47.567 [2024-10-01 13:51:57.629466] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23db2c0, cid 7, qid 0 00:17:47.567 [2024-10-01 13:51:57.629605] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.568 [2024-10-01 13:51:57.629613] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.568 [2024-10-01 13:51:57.629618] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629622] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2376750): datao=0, datal=8192, cccid=5 00:17:47.568 [2024-10-01 13:51:57.629629] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23dafc0) on tqpair(0x2376750): expected_datao=0, payload_size=8192 00:17:47.568 [2024-10-01 13:51:57.629634] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629657] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629664] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629671] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.568 [2024-10-01 13:51:57.629678] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.568 [2024-10-01 13:51:57.629683] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629687] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2376750): datao=0, datal=512, cccid=4 00:17:47.568 [2024-10-01 13:51:57.629692] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23dae40) on tqpair(0x2376750): expected_datao=0, payload_size=512 00:17:47.568 [2024-10-01 13:51:57.629697] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629705] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629709] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629716] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.568 [2024-10-01 13:51:57.629722] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.568 [2024-10-01 13:51:57.629727] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629731] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2376750): datao=0, datal=512, cccid=6 00:17:47.568 [2024-10-01 13:51:57.629736] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23db140) on tqpair(0x2376750): expected_datao=0, payload_size=512 00:17:47.568 [2024-10-01 13:51:57.629741] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629748] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629752] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629759] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.568 [2024-10-01 13:51:57.629765] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.568 [2024-10-01 13:51:57.629770] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629774] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x2376750): datao=0, datal=4096, cccid=7 00:17:47.568 [2024-10-01 13:51:57.629779] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23db2c0) on tqpair(0x2376750): expected_datao=0, payload_size=4096 00:17:47.568 [2024-10-01 13:51:57.629784] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629791] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629796] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629806] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.568 [2024-10-01 13:51:57.629813] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.568 [2024-10-01 13:51:57.629817] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629822] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dafc0) on tqpair=0x2376750 00:17:47.568 [2024-10-01 13:51:57.629842] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.568 [2024-10-01 13:51:57.629850] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.568 [2024-10-01 13:51:57.629855] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629859] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dae40) on tqpair=0x2376750 00:17:47.568 [2024-10-01 13:51:57.629874] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.568 [2024-10-01 13:51:57.629882] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.568 [2024-10-01 13:51:57.629887] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.629891] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23db140) on tqpair=0x2376750 00:17:47.568 [2024-10-01 13:51:57.629900] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.568 [2024-10-01 13:51:57.629907] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.568 [2024-10-01 13:51:57.633933] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.568 [2024-10-01 13:51:57.633944] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23db2c0) on tqpair=0x2376750 00:17:47.568 ===================================================== 00:17:47.568 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:47.568 ===================================================== 00:17:47.568 Controller Capabilities/Features 00:17:47.568 ================================ 00:17:47.568 Vendor ID: 8086 00:17:47.568 Subsystem Vendor ID: 8086 00:17:47.568 Serial Number: SPDK00000000000001 00:17:47.568 Model Number: SPDK bdev Controller 00:17:47.568 Firmware Version: 25.01 00:17:47.568 Recommended Arb Burst: 6 00:17:47.568 IEEE OUI Identifier: e4 d2 5c 00:17:47.568 Multi-path I/O 00:17:47.568 May have multiple subsystem ports: Yes 00:17:47.568 May have multiple controllers: Yes 00:17:47.568 Associated with SR-IOV VF: No 00:17:47.568 Max Data Transfer Size: 131072 00:17:47.568 Max Number of Namespaces: 32 00:17:47.568 Max Number of I/O Queues: 127 00:17:47.568 NVMe Specification Version (VS): 1.3 00:17:47.568 NVMe Specification Version (Identify): 1.3 00:17:47.568 Maximum Queue Entries: 128 00:17:47.568 Contiguous Queues Required: Yes 00:17:47.568 Arbitration Mechanisms Supported 00:17:47.568 Weighted Round Robin: Not Supported 
00:17:47.568 Vendor Specific: Not Supported 00:17:47.568 Reset Timeout: 15000 ms 00:17:47.568 Doorbell Stride: 4 bytes 00:17:47.568 NVM Subsystem Reset: Not Supported 00:17:47.568 Command Sets Supported 00:17:47.568 NVM Command Set: Supported 00:17:47.568 Boot Partition: Not Supported 00:17:47.568 Memory Page Size Minimum: 4096 bytes 00:17:47.568 Memory Page Size Maximum: 4096 bytes 00:17:47.568 Persistent Memory Region: Not Supported 00:17:47.568 Optional Asynchronous Events Supported 00:17:47.568 Namespace Attribute Notices: Supported 00:17:47.568 Firmware Activation Notices: Not Supported 00:17:47.568 ANA Change Notices: Not Supported 00:17:47.568 PLE Aggregate Log Change Notices: Not Supported 00:17:47.568 LBA Status Info Alert Notices: Not Supported 00:17:47.568 EGE Aggregate Log Change Notices: Not Supported 00:17:47.568 Normal NVM Subsystem Shutdown event: Not Supported 00:17:47.568 Zone Descriptor Change Notices: Not Supported 00:17:47.568 Discovery Log Change Notices: Not Supported 00:17:47.568 Controller Attributes 00:17:47.568 128-bit Host Identifier: Supported 00:17:47.568 Non-Operational Permissive Mode: Not Supported 00:17:47.568 NVM Sets: Not Supported 00:17:47.568 Read Recovery Levels: Not Supported 00:17:47.568 Endurance Groups: Not Supported 00:17:47.568 Predictable Latency Mode: Not Supported 00:17:47.568 Traffic Based Keep ALive: Not Supported 00:17:47.568 Namespace Granularity: Not Supported 00:17:47.568 SQ Associations: Not Supported 00:17:47.568 UUID List: Not Supported 00:17:47.568 Multi-Domain Subsystem: Not Supported 00:17:47.568 Fixed Capacity Management: Not Supported 00:17:47.568 Variable Capacity Management: Not Supported 00:17:47.568 Delete Endurance Group: Not Supported 00:17:47.568 Delete NVM Set: Not Supported 00:17:47.568 Extended LBA Formats Supported: Not Supported 00:17:47.568 Flexible Data Placement Supported: Not Supported 00:17:47.568 00:17:47.568 Controller Memory Buffer Support 00:17:47.568 ================================ 00:17:47.568 Supported: No 00:17:47.568 00:17:47.568 Persistent Memory Region Support 00:17:47.568 ================================ 00:17:47.568 Supported: No 00:17:47.568 00:17:47.568 Admin Command Set Attributes 00:17:47.568 ============================ 00:17:47.568 Security Send/Receive: Not Supported 00:17:47.568 Format NVM: Not Supported 00:17:47.568 Firmware Activate/Download: Not Supported 00:17:47.568 Namespace Management: Not Supported 00:17:47.568 Device Self-Test: Not Supported 00:17:47.568 Directives: Not Supported 00:17:47.568 NVMe-MI: Not Supported 00:17:47.568 Virtualization Management: Not Supported 00:17:47.568 Doorbell Buffer Config: Not Supported 00:17:47.568 Get LBA Status Capability: Not Supported 00:17:47.568 Command & Feature Lockdown Capability: Not Supported 00:17:47.568 Abort Command Limit: 4 00:17:47.568 Async Event Request Limit: 4 00:17:47.568 Number of Firmware Slots: N/A 00:17:47.568 Firmware Slot 1 Read-Only: N/A 00:17:47.568 Firmware Activation Without Reset: N/A 00:17:47.568 Multiple Update Detection Support: N/A 00:17:47.568 Firmware Update Granularity: No Information Provided 00:17:47.568 Per-Namespace SMART Log: No 00:17:47.568 Asymmetric Namespace Access Log Page: Not Supported 00:17:47.568 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:47.568 Command Effects Log Page: Supported 00:17:47.568 Get Log Page Extended Data: Supported 00:17:47.568 Telemetry Log Pages: Not Supported 00:17:47.568 Persistent Event Log Pages: Not Supported 00:17:47.568 Supported Log Pages Log Page: May Support 
00:17:47.568 Commands Supported & Effects Log Page: Not Supported 00:17:47.568 Feature Identifiers & Effects Log Page:May Support 00:17:47.568 NVMe-MI Commands & Effects Log Page: May Support 00:17:47.568 Data Area 4 for Telemetry Log: Not Supported 00:17:47.568 Error Log Page Entries Supported: 128 00:17:47.568 Keep Alive: Supported 00:17:47.568 Keep Alive Granularity: 10000 ms 00:17:47.568 00:17:47.568 NVM Command Set Attributes 00:17:47.568 ========================== 00:17:47.568 Submission Queue Entry Size 00:17:47.568 Max: 64 00:17:47.568 Min: 64 00:17:47.569 Completion Queue Entry Size 00:17:47.569 Max: 16 00:17:47.569 Min: 16 00:17:47.569 Number of Namespaces: 32 00:17:47.569 Compare Command: Supported 00:17:47.569 Write Uncorrectable Command: Not Supported 00:17:47.569 Dataset Management Command: Supported 00:17:47.569 Write Zeroes Command: Supported 00:17:47.569 Set Features Save Field: Not Supported 00:17:47.569 Reservations: Supported 00:17:47.569 Timestamp: Not Supported 00:17:47.569 Copy: Supported 00:17:47.569 Volatile Write Cache: Present 00:17:47.569 Atomic Write Unit (Normal): 1 00:17:47.569 Atomic Write Unit (PFail): 1 00:17:47.569 Atomic Compare & Write Unit: 1 00:17:47.569 Fused Compare & Write: Supported 00:17:47.569 Scatter-Gather List 00:17:47.569 SGL Command Set: Supported 00:17:47.569 SGL Keyed: Supported 00:17:47.569 SGL Bit Bucket Descriptor: Not Supported 00:17:47.569 SGL Metadata Pointer: Not Supported 00:17:47.569 Oversized SGL: Not Supported 00:17:47.569 SGL Metadata Address: Not Supported 00:17:47.569 SGL Offset: Supported 00:17:47.569 Transport SGL Data Block: Not Supported 00:17:47.569 Replay Protected Memory Block: Not Supported 00:17:47.569 00:17:47.569 Firmware Slot Information 00:17:47.569 ========================= 00:17:47.569 Active slot: 1 00:17:47.569 Slot 1 Firmware Revision: 25.01 00:17:47.569 00:17:47.569 00:17:47.569 Commands Supported and Effects 00:17:47.569 ============================== 00:17:47.569 Admin Commands 00:17:47.569 -------------- 00:17:47.569 Get Log Page (02h): Supported 00:17:47.569 Identify (06h): Supported 00:17:47.569 Abort (08h): Supported 00:17:47.569 Set Features (09h): Supported 00:17:47.569 Get Features (0Ah): Supported 00:17:47.569 Asynchronous Event Request (0Ch): Supported 00:17:47.569 Keep Alive (18h): Supported 00:17:47.569 I/O Commands 00:17:47.569 ------------ 00:17:47.569 Flush (00h): Supported LBA-Change 00:17:47.569 Write (01h): Supported LBA-Change 00:17:47.569 Read (02h): Supported 00:17:47.569 Compare (05h): Supported 00:17:47.569 Write Zeroes (08h): Supported LBA-Change 00:17:47.569 Dataset Management (09h): Supported LBA-Change 00:17:47.569 Copy (19h): Supported LBA-Change 00:17:47.569 00:17:47.569 Error Log 00:17:47.569 ========= 00:17:47.569 00:17:47.569 Arbitration 00:17:47.569 =========== 00:17:47.569 Arbitration Burst: 1 00:17:47.569 00:17:47.569 Power Management 00:17:47.569 ================ 00:17:47.569 Number of Power States: 1 00:17:47.569 Current Power State: Power State #0 00:17:47.569 Power State #0: 00:17:47.569 Max Power: 0.00 W 00:17:47.569 Non-Operational State: Operational 00:17:47.569 Entry Latency: Not Reported 00:17:47.569 Exit Latency: Not Reported 00:17:47.569 Relative Read Throughput: 0 00:17:47.569 Relative Read Latency: 0 00:17:47.569 Relative Write Throughput: 0 00:17:47.569 Relative Write Latency: 0 00:17:47.569 Idle Power: Not Reported 00:17:47.569 Active Power: Not Reported 00:17:47.569 Non-Operational Permissive Mode: Not Supported 00:17:47.569 00:17:47.569 Health 
Information 00:17:47.569 ================== 00:17:47.569 Critical Warnings: 00:17:47.569 Available Spare Space: OK 00:17:47.569 Temperature: OK 00:17:47.569 Device Reliability: OK 00:17:47.569 Read Only: No 00:17:47.569 Volatile Memory Backup: OK 00:17:47.569 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:47.569 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:47.569 Available Spare: 0% 00:17:47.569 Available Spare Threshold: 0% 00:17:47.569 Life Percentage Used:[2024-10-01 13:51:57.634069] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.569 [2024-10-01 13:51:57.634079] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2376750) 00:17:47.569 [2024-10-01 13:51:57.634090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.569 [2024-10-01 13:51:57.634123] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23db2c0, cid 7, qid 0 00:17:47.569 [2024-10-01 13:51:57.634176] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.569 [2024-10-01 13:51:57.634185] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.569 [2024-10-01 13:51:57.634190] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.569 [2024-10-01 13:51:57.634195] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23db2c0) on tqpair=0x2376750 00:17:47.569 [2024-10-01 13:51:57.634241] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:47.569 [2024-10-01 13:51:57.634257] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23da840) on tqpair=0x2376750 00:17:47.569 [2024-10-01 13:51:57.634266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.569 [2024-10-01 13:51:57.634272] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23da9c0) on tqpair=0x2376750 00:17:47.569 [2024-10-01 13:51:57.634284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.569 [2024-10-01 13:51:57.634290] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dab40) on tqpair=0x2376750 00:17:47.569 [2024-10-01 13:51:57.634295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.569 [2024-10-01 13:51:57.634301] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.569 [2024-10-01 13:51:57.634307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.569 [2024-10-01 13:51:57.634318] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.569 [2024-10-01 13:51:57.634323] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.569 [2024-10-01 13:51:57.634328] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.569 [2024-10-01 13:51:57.634337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.569 [2024-10-01 13:51:57.634366] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.569 [2024-10-01 
13:51:57.634412] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.569 [2024-10-01 13:51:57.634421] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.569 [2024-10-01 13:51:57.634425] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.569 [2024-10-01 13:51:57.634430] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.569 [2024-10-01 13:51:57.634440] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.569 [2024-10-01 13:51:57.634446] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.569 [2024-10-01 13:51:57.634450] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.569 [2024-10-01 13:51:57.634459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.569 [2024-10-01 13:51:57.634486] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.569 [2024-10-01 13:51:57.634569] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.569 [2024-10-01 13:51:57.634580] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.569 [2024-10-01 13:51:57.634584] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.569 [2024-10-01 13:51:57.634589] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.569 [2024-10-01 13:51:57.634595] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:47.569 [2024-10-01 13:51:57.634601] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:47.569 [2024-10-01 13:51:57.634613] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.569 [2024-10-01 13:51:57.634619] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.569 [2024-10-01 13:51:57.634623] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.569 [2024-10-01 13:51:57.634632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.569 [2024-10-01 13:51:57.634656] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.569 [2024-10-01 13:51:57.634705] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.569 [2024-10-01 13:51:57.634714] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.569 [2024-10-01 13:51:57.634719] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.569 [2024-10-01 13:51:57.634724] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.569 [2024-10-01 13:51:57.634737] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.634743] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.634748] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.570 [2024-10-01 13:51:57.634756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.570 [2024-10-01 13:51:57.634779] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.570 [2024-10-01 13:51:57.634825] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.570 [2024-10-01 13:51:57.634834] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.570 [2024-10-01 13:51:57.634838] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.634843] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.570 [2024-10-01 13:51:57.634856] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.634862] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.634866] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.570 [2024-10-01 13:51:57.634875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.570 [2024-10-01 13:51:57.634897] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.570 [2024-10-01 13:51:57.634956] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.570 [2024-10-01 13:51:57.634968] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.570 [2024-10-01 13:51:57.634972] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.634977] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.570 [2024-10-01 13:51:57.634991] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.634997] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635001] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.570 [2024-10-01 13:51:57.635010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.570 [2024-10-01 13:51:57.635035] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.570 [2024-10-01 13:51:57.635079] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.570 [2024-10-01 13:51:57.635088] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.570 [2024-10-01 13:51:57.635092] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635097] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.570 [2024-10-01 13:51:57.635110] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635116] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635121] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.570 [2024-10-01 13:51:57.635129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.570 [2024-10-01 13:51:57.635152] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.570 [2024-10-01 13:51:57.635202] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.570 [2024-10-01 
13:51:57.635210] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.570 [2024-10-01 13:51:57.635215] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635219] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.570 [2024-10-01 13:51:57.635232] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635238] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635243] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.570 [2024-10-01 13:51:57.635251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.570 [2024-10-01 13:51:57.635274] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.570 [2024-10-01 13:51:57.635316] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.570 [2024-10-01 13:51:57.635325] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.570 [2024-10-01 13:51:57.635329] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635334] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.570 [2024-10-01 13:51:57.635347] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635353] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635357] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.570 [2024-10-01 13:51:57.635366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.570 [2024-10-01 13:51:57.635388] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.570 [2024-10-01 13:51:57.635434] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.570 [2024-10-01 13:51:57.635442] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.570 [2024-10-01 13:51:57.635447] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635451] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.570 [2024-10-01 13:51:57.635464] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635470] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635474] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.570 [2024-10-01 13:51:57.635483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.570 [2024-10-01 13:51:57.635505] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.570 [2024-10-01 13:51:57.635554] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.570 [2024-10-01 13:51:57.635569] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.570 [2024-10-01 13:51:57.635575] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.570 
[2024-10-01 13:51:57.635580] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.570 [2024-10-01 13:51:57.635594] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635600] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635605] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.570 [2024-10-01 13:51:57.635613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.570 [2024-10-01 13:51:57.635637] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.570 [2024-10-01 13:51:57.635683] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.570 [2024-10-01 13:51:57.635692] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.570 [2024-10-01 13:51:57.635696] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635701] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.570 [2024-10-01 13:51:57.635714] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635720] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635724] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.570 [2024-10-01 13:51:57.635733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.570 [2024-10-01 13:51:57.635756] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.570 [2024-10-01 13:51:57.635799] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.570 [2024-10-01 13:51:57.635813] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.570 [2024-10-01 13:51:57.635819] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635824] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.570 [2024-10-01 13:51:57.635837] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635852] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635856] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.570 [2024-10-01 13:51:57.635865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.570 [2024-10-01 13:51:57.635889] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.570 [2024-10-01 13:51:57.635950] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.570 [2024-10-01 13:51:57.635962] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.570 [2024-10-01 13:51:57.635966] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635971] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.570 [2024-10-01 13:51:57.635985] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635991] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.635995] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.570 [2024-10-01 13:51:57.636004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.570 [2024-10-01 13:51:57.636029] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.570 [2024-10-01 13:51:57.636076] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.570 [2024-10-01 13:51:57.636084] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.570 [2024-10-01 13:51:57.636089] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.636094] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.570 [2024-10-01 13:51:57.636106] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.636113] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.636117] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.570 [2024-10-01 13:51:57.636126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.570 [2024-10-01 13:51:57.636148] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.570 [2024-10-01 13:51:57.636194] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.570 [2024-10-01 13:51:57.636202] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.570 [2024-10-01 13:51:57.636207] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.636211] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.570 [2024-10-01 13:51:57.636224] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.570 [2024-10-01 13:51:57.636230] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636235] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.571 [2024-10-01 13:51:57.636243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.571 [2024-10-01 13:51:57.636266] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.571 [2024-10-01 13:51:57.636311] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.571 [2024-10-01 13:51:57.636319] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.571 [2024-10-01 13:51:57.636324] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636329] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.571 [2024-10-01 13:51:57.636341] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636347] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636352] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.571 [2024-10-01 13:51:57.636360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.571 [2024-10-01 13:51:57.636382] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.571 [2024-10-01 13:51:57.636425] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.571 [2024-10-01 13:51:57.636434] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.571 [2024-10-01 13:51:57.636438] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636443] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.571 [2024-10-01 13:51:57.636456] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636462] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636466] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.571 [2024-10-01 13:51:57.636475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.571 [2024-10-01 13:51:57.636498] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.571 [2024-10-01 13:51:57.636544] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.571 [2024-10-01 13:51:57.636552] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.571 [2024-10-01 13:51:57.636557] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636562] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.571 [2024-10-01 13:51:57.636574] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636580] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636585] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.571 [2024-10-01 13:51:57.636593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.571 [2024-10-01 13:51:57.636616] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.571 [2024-10-01 13:51:57.636658] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.571 [2024-10-01 13:51:57.636666] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.571 [2024-10-01 13:51:57.636671] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636676] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.571 [2024-10-01 13:51:57.636688] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636694] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636699] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.571 [2024-10-01 13:51:57.636707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.571 [2024-10-01 13:51:57.636729] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.571 [2024-10-01 13:51:57.636781] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.571 [2024-10-01 13:51:57.636789] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.571 [2024-10-01 13:51:57.636794] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636799] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.571 [2024-10-01 13:51:57.636811] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636817] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636822] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.571 [2024-10-01 13:51:57.636830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.571 [2024-10-01 13:51:57.636853] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.571 [2024-10-01 13:51:57.636898] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.571 [2024-10-01 13:51:57.636906] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.571 [2024-10-01 13:51:57.636926] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636933] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.571 [2024-10-01 13:51:57.636948] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636954] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.636959] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.571 [2024-10-01 13:51:57.636967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.571 [2024-10-01 13:51:57.636992] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.571 [2024-10-01 13:51:57.637038] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.571 [2024-10-01 13:51:57.637047] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.571 [2024-10-01 13:51:57.637051] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637056] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.571 [2024-10-01 13:51:57.637069] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637075] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637080] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.571 [2024-10-01 13:51:57.637088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.571 [2024-10-01 13:51:57.637111] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.571 [2024-10-01 
13:51:57.637157] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.571 [2024-10-01 13:51:57.637165] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.571 [2024-10-01 13:51:57.637170] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637175] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.571 [2024-10-01 13:51:57.637187] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637193] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637198] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.571 [2024-10-01 13:51:57.637206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.571 [2024-10-01 13:51:57.637229] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.571 [2024-10-01 13:51:57.637278] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.571 [2024-10-01 13:51:57.637286] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.571 [2024-10-01 13:51:57.637291] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637295] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.571 [2024-10-01 13:51:57.637308] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637314] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637318] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.571 [2024-10-01 13:51:57.637327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.571 [2024-10-01 13:51:57.637349] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.571 [2024-10-01 13:51:57.637399] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.571 [2024-10-01 13:51:57.637407] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.571 [2024-10-01 13:51:57.637412] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637417] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.571 [2024-10-01 13:51:57.637429] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637436] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637440] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.571 [2024-10-01 13:51:57.637449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.571 [2024-10-01 13:51:57.637471] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.571 [2024-10-01 13:51:57.637516] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.571 [2024-10-01 13:51:57.637525] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.571 
[2024-10-01 13:51:57.637529] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637534] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.571 [2024-10-01 13:51:57.637547] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637553] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637557] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.571 [2024-10-01 13:51:57.637566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.571 [2024-10-01 13:51:57.637588] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.571 [2024-10-01 13:51:57.637631] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.571 [2024-10-01 13:51:57.637639] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.571 [2024-10-01 13:51:57.637643] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637648] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.571 [2024-10-01 13:51:57.637661] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637667] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.571 [2024-10-01 13:51:57.637672] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.572 [2024-10-01 13:51:57.637680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.572 [2024-10-01 13:51:57.637703] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.572 [2024-10-01 13:51:57.637752] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.572 [2024-10-01 13:51:57.637760] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.572 [2024-10-01 13:51:57.637765] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.572 [2024-10-01 13:51:57.637770] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.572 [2024-10-01 13:51:57.637782] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.572 [2024-10-01 13:51:57.637788] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.572 [2024-10-01 13:51:57.637793] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.572 [2024-10-01 13:51:57.637801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.572 [2024-10-01 13:51:57.637824] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.572 [2024-10-01 13:51:57.637873] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.572 [2024-10-01 13:51:57.637882] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.572 [2024-10-01 13:51:57.637886] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.572 [2024-10-01 13:51:57.637891] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.572 [2024-10-01 13:51:57.637904] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.572 [2024-10-01 13:51:57.641923] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.572 [2024-10-01 13:51:57.641945] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2376750) 00:17:47.572 [2024-10-01 13:51:57.641957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.572 [2024-10-01 13:51:57.641990] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23dacc0, cid 3, qid 0 00:17:47.572 [2024-10-01 13:51:57.642042] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.572 [2024-10-01 13:51:57.642051] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.572 [2024-10-01 13:51:57.642056] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.572 [2024-10-01 13:51:57.642060] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23dacc0) on tqpair=0x2376750 00:17:47.572 [2024-10-01 13:51:57.642071] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:17:47.572 0% 00:17:47.572 Data Units Read: 0 00:17:47.572 Data Units Written: 0 00:17:47.572 Host Read Commands: 0 00:17:47.572 Host Write Commands: 0 00:17:47.572 Controller Busy Time: 0 minutes 00:17:47.572 Power Cycles: 0 00:17:47.572 Power On Hours: 0 hours 00:17:47.572 Unsafe Shutdowns: 0 00:17:47.572 Unrecoverable Media Errors: 0 00:17:47.572 Lifetime Error Log Entries: 0 00:17:47.572 Warning Temperature Time: 0 minutes 00:17:47.572 Critical Temperature Time: 0 minutes 00:17:47.572 00:17:47.572 Number of Queues 00:17:47.572 ================ 00:17:47.572 Number of I/O Submission Queues: 127 00:17:47.572 Number of I/O Completion Queues: 127 00:17:47.572 00:17:47.572 Active Namespaces 00:17:47.572 ================= 00:17:47.572 Namespace ID:1 00:17:47.572 Error Recovery Timeout: Unlimited 00:17:47.572 Command Set Identifier: NVM (00h) 00:17:47.572 Deallocate: Supported 00:17:47.572 Deallocated/Unwritten Error: Not Supported 00:17:47.572 Deallocated Read Value: Unknown 00:17:47.572 Deallocate in Write Zeroes: Not Supported 00:17:47.572 Deallocated Guard Field: 0xFFFF 00:17:47.572 Flush: Supported 00:17:47.572 Reservation: Supported 00:17:47.572 Namespace Sharing Capabilities: Multiple Controllers 00:17:47.572 Size (in LBAs): 131072 (0GiB) 00:17:47.572 Capacity (in LBAs): 131072 (0GiB) 00:17:47.572 Utilization (in LBAs): 131072 (0GiB) 00:17:47.572 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:47.572 EUI64: ABCDEF0123456789 00:17:47.572 UUID: 3097fbe6-42d9-4a7f-89ea-7924fc205064 00:17:47.572 Thin Provisioning: Not Supported 00:17:47.572 Per-NS Atomic Units: Yes 00:17:47.572 Atomic Boundary Size (Normal): 0 00:17:47.572 Atomic Boundary Size (PFail): 0 00:17:47.572 Atomic Boundary Offset: 0 00:17:47.572 Maximum Single Source Range Length: 65535 00:17:47.572 Maximum Copy Length: 65535 00:17:47.572 Maximum Source Range Count: 1 00:17:47.572 NGUID/EUI64 Never Reused: No 00:17:47.572 Namespace Write Protected: No 00:17:47.572 Number of LBA Formats: 1 00:17:47.572 Current LBA Format: LBA Format #00 00:17:47.572 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:47.572 00:17:47.572 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:47.572 13:51:57 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.572 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.572 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:47.572 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.572 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:47.572 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:47.572 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:47.572 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:47.572 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:47.572 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:47.572 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:47.572 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:47.572 rmmod nvme_tcp 00:17:47.830 rmmod nvme_fabrics 00:17:47.830 rmmod nvme_keyring 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 74672 ']' 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 74672 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 74672 ']' 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 74672 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74672 00:17:47.830 killing process with pid 74672 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74672' 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 74672 00:17:47.830 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 74672 00:17:48.092 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 
00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:48.093 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:48.350 00:17:48.350 real 0m3.042s 00:17:48.350 user 0m7.476s 00:17:48.350 sys 0m0.788s 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:48.350 ************************************ 00:17:48.350 END TEST nvmf_identify 00:17:48.350 ************************************ 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.350 ************************************ 00:17:48.350 START TEST nvmf_perf 00:17:48.350 ************************************ 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:48.350 * Looking for test storage... 00:17:48.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:17:48.350 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:48.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.621 --rc genhtml_branch_coverage=1 00:17:48.621 --rc genhtml_function_coverage=1 00:17:48.621 --rc genhtml_legend=1 00:17:48.621 --rc geninfo_all_blocks=1 00:17:48.621 --rc geninfo_unexecuted_blocks=1 00:17:48.621 00:17:48.621 ' 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:48.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.621 --rc genhtml_branch_coverage=1 00:17:48.621 --rc genhtml_function_coverage=1 00:17:48.621 --rc genhtml_legend=1 00:17:48.621 --rc geninfo_all_blocks=1 00:17:48.621 --rc geninfo_unexecuted_blocks=1 00:17:48.621 00:17:48.621 ' 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:48.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.621 --rc genhtml_branch_coverage=1 00:17:48.621 --rc genhtml_function_coverage=1 00:17:48.621 --rc genhtml_legend=1 00:17:48.621 --rc geninfo_all_blocks=1 00:17:48.621 --rc geninfo_unexecuted_blocks=1 00:17:48.621 00:17:48.621 ' 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:48.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.621 --rc genhtml_branch_coverage=1 00:17:48.621 --rc genhtml_function_coverage=1 00:17:48.621 --rc genhtml_legend=1 00:17:48.621 --rc geninfo_all_blocks=1 00:17:48.621 --rc geninfo_unexecuted_blocks=1 00:17:48.621 00:17:48.621 ' 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.621 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:48.622 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:48.622 Cannot find device "nvmf_init_br" 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:48.622 Cannot find device "nvmf_init_br2" 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:48.622 Cannot find device "nvmf_tgt_br" 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:48.622 Cannot find device "nvmf_tgt_br2" 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:48.622 Cannot find device "nvmf_init_br" 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:48.622 Cannot find device "nvmf_init_br2" 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:48.622 Cannot find device "nvmf_tgt_br" 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:48.622 Cannot find device "nvmf_tgt_br2" 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:48.622 Cannot find device "nvmf_br" 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:48.622 Cannot find device "nvmf_init_if" 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:48.622 Cannot find device "nvmf_init_if2" 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:48.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:48.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:48.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:48.880 13:51:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:48.880 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:48.880 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:17:48.880 00:17:48.880 --- 10.0.0.3 ping statistics --- 00:17:48.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.880 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:48.880 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:48.880 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:48.880 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:17:48.880 00:17:48.880 --- 10.0.0.4 ping statistics --- 00:17:48.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.880 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:48.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:48.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:17:48.881 00:17:48.881 --- 10.0.0.1 ping statistics --- 00:17:48.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.881 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:48.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:17:48.881 00:17:48.881 --- 10.0.0.2 ping statistics --- 00:17:48.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.881 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:48.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:48.881 13:51:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=74928 00:17:48.881 13:51:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 74928 00:17:48.881 13:51:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:48.881 13:51:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 74928 ']' 00:17:48.881 13:51:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.881 13:51:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:48.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.881 13:51:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
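(Recap, not an additional test step: the veth/namespace topology the harness has just assembled and verified with the pings above boils down to the following abbreviated sketch. Every command is taken verbatim from the trace; the second initiator/target pair, the corresponding "ip link set ... up" calls, and the matching rules for nvmf_init_if2 are omitted here for brevity.)

# isolate the target in its own network namespace
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the initiator end stays in the root namespace, the target end moves into the namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# addressing: 10.0.0.1 on the initiator side, 10.0.0.3 on the target side
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# bridge the two sides together and accept NVMe/TCP traffic on port 4420
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# start the NVMe-oF target inside the namespace, then wait for its RPC socket at /var/tmp/spdk.sock
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF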
00:17:48.881 13:51:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:48.881 13:51:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:49.138 [2024-10-01 13:51:59.080105] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:17:49.138 [2024-10-01 13:51:59.080257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.138 [2024-10-01 13:51:59.226752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:49.395 [2024-10-01 13:51:59.347395] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.395 [2024-10-01 13:51:59.347460] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.395 [2024-10-01 13:51:59.347473] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.395 [2024-10-01 13:51:59.347482] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.395 [2024-10-01 13:51:59.347491] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.395 [2024-10-01 13:51:59.347573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.395 [2024-10-01 13:51:59.347791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.395 [2024-10-01 13:51:59.348398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.395 [2024-10-01 13:51:59.348434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.395 [2024-10-01 13:51:59.404515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:50.038 13:52:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:50.038 13:52:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:17:50.038 13:52:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:50.038 13:52:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:50.038 13:52:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:50.038 13:52:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.038 13:52:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:50.038 13:52:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:50.603 13:52:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:50.603 13:52:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:50.860 13:52:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:50.860 13:52:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:51.118 13:52:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:51.118 13:52:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:17:51.118 13:52:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:51.118 13:52:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:51.118 13:52:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:51.376 [2024-10-01 13:52:01.316470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.376 13:52:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.633 13:52:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:51.633 13:52:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.890 13:52:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:51.890 13:52:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:52.148 13:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:52.406 [2024-10-01 13:52:02.343503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:52.406 13:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:52.663 13:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:52.663 13:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:52.663 13:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:52.663 13:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:53.595 Initializing NVMe Controllers 00:17:53.595 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:53.595 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:53.595 Initialization complete. Launching workers. 00:17:53.595 ======================================================== 00:17:53.595 Latency(us) 00:17:53.595 Device Information : IOPS MiB/s Average min max 00:17:53.595 PCIE (0000:00:10.0) NSID 1 from core 0: 23552.00 92.00 1358.71 378.50 7319.83 00:17:53.595 ======================================================== 00:17:53.595 Total : 23552.00 92.00 1358.71 378.50 7319.83 00:17:53.595 00:17:53.595 13:52:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:54.966 Initializing NVMe Controllers 00:17:54.966 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:54.966 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:54.966 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:54.966 Initialization complete. Launching workers. 
00:17:54.966 ======================================================== 00:17:54.966 Latency(us) 00:17:54.966 Device Information : IOPS MiB/s Average min max 00:17:54.966 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3334.28 13.02 299.51 111.90 7261.19 00:17:54.966 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 126.74 0.50 7951.94 1366.60 12012.69 00:17:54.966 ======================================================== 00:17:54.966 Total : 3461.03 13.52 579.75 111.90 12012.69 00:17:54.966 00:17:54.966 13:52:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:56.338 Initializing NVMe Controllers 00:17:56.338 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:56.338 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:56.338 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:56.338 Initialization complete. Launching workers. 00:17:56.338 ======================================================== 00:17:56.338 Latency(us) 00:17:56.338 Device Information : IOPS MiB/s Average min max 00:17:56.338 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8387.61 32.76 3816.47 650.31 9075.08 00:17:56.338 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3925.04 15.33 8183.74 4452.23 17002.34 00:17:56.338 ======================================================== 00:17:56.338 Total : 12312.65 48.10 5208.67 650.31 17002.34 00:17:56.338 00:17:56.338 13:52:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:56.338 13:52:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:58.864 Initializing NVMe Controllers 00:17:58.864 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:58.864 Controller IO queue size 128, less than required. 00:17:58.865 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:58.865 Controller IO queue size 128, less than required. 00:17:58.865 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:58.865 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:58.865 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:58.865 Initialization complete. Launching workers. 
00:17:58.865 ======================================================== 00:17:58.865 Latency(us) 00:17:58.865 Device Information : IOPS MiB/s Average min max 00:17:58.865 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1610.96 402.74 80537.77 39774.25 120010.92 00:17:58.865 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 663.98 166.00 206989.32 66937.80 375394.91 00:17:58.865 ======================================================== 00:17:58.865 Total : 2274.94 568.73 117444.95 39774.25 375394.91 00:17:58.865 00:17:59.139 13:52:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:17:59.397 Initializing NVMe Controllers 00:17:59.397 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:59.397 Controller IO queue size 128, less than required. 00:17:59.397 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:59.397 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:59.397 Controller IO queue size 128, less than required. 00:17:59.397 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:59.397 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:17:59.397 WARNING: Some requested NVMe devices were skipped 00:17:59.397 No valid NVMe controllers or AIO or URING devices found 00:17:59.397 13:52:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:01.957 Initializing NVMe Controllers 00:18:01.957 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.957 Controller IO queue size 128, less than required. 00:18:01.957 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:01.957 Controller IO queue size 128, less than required. 00:18:01.957 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:01.957 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:01.957 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:01.957 Initialization complete. Launching workers. 
00:18:01.957 00:18:01.957 ==================== 00:18:01.957 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:01.957 TCP transport: 00:18:01.957 polls: 8448 00:18:01.957 idle_polls: 4714 00:18:01.957 sock_completions: 3734 00:18:01.957 nvme_completions: 6149 00:18:01.957 submitted_requests: 9192 00:18:01.957 queued_requests: 1 00:18:01.957 00:18:01.957 ==================== 00:18:01.957 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:01.957 TCP transport: 00:18:01.957 polls: 8563 00:18:01.957 idle_polls: 5238 00:18:01.957 sock_completions: 3325 00:18:01.957 nvme_completions: 5657 00:18:01.957 submitted_requests: 8476 00:18:01.957 queued_requests: 1 00:18:01.957 ======================================================== 00:18:01.957 Latency(us) 00:18:01.957 Device Information : IOPS MiB/s Average min max 00:18:01.957 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1536.79 384.20 84825.20 48164.35 131267.59 00:18:01.957 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1413.81 353.45 90482.46 39252.09 155904.90 00:18:01.957 ======================================================== 00:18:01.957 Total : 2950.60 737.65 87535.93 39252.09 155904.90 00:18:01.957 00:18:01.957 13:52:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:01.957 13:52:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:02.215 rmmod nvme_tcp 00:18:02.215 rmmod nvme_fabrics 00:18:02.215 rmmod nvme_keyring 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 74928 ']' 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 74928 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 74928 ']' 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 74928 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74928 00:18:02.215 killing process with pid 74928 00:18:02.215 13:52:12 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74928' 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 74928 00:18:02.215 13:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 74928 00:18:03.147 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:03.147 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:03.147 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:03.147 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:03.148 ************************************ 00:18:03.148 END TEST nvmf_perf 00:18:03.148 00:18:03.148 real 0m14.938s 00:18:03.148 
user 0m53.642s 00:18:03.148 sys 0m4.013s 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:03.148 13:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:03.148 ************************************ 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.406 ************************************ 00:18:03.406 START TEST nvmf_fio_host 00:18:03.406 ************************************ 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:03.406 * Looking for test storage... 00:18:03.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.406 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:03.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.407 --rc genhtml_branch_coverage=1 00:18:03.407 --rc genhtml_function_coverage=1 00:18:03.407 --rc genhtml_legend=1 00:18:03.407 --rc geninfo_all_blocks=1 00:18:03.407 --rc geninfo_unexecuted_blocks=1 00:18:03.407 00:18:03.407 ' 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:03.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.407 --rc genhtml_branch_coverage=1 00:18:03.407 --rc genhtml_function_coverage=1 00:18:03.407 --rc genhtml_legend=1 00:18:03.407 --rc geninfo_all_blocks=1 00:18:03.407 --rc geninfo_unexecuted_blocks=1 00:18:03.407 00:18:03.407 ' 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:03.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.407 --rc genhtml_branch_coverage=1 00:18:03.407 --rc genhtml_function_coverage=1 00:18:03.407 --rc genhtml_legend=1 00:18:03.407 --rc geninfo_all_blocks=1 00:18:03.407 --rc geninfo_unexecuted_blocks=1 00:18:03.407 00:18:03.407 ' 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:03.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.407 --rc genhtml_branch_coverage=1 00:18:03.407 --rc genhtml_function_coverage=1 00:18:03.407 --rc genhtml_legend=1 00:18:03.407 --rc geninfo_all_blocks=1 00:18:03.407 --rc geninfo_unexecuted_blocks=1 00:18:03.407 00:18:03.407 ' 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.407 13:52:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.407 13:52:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:03.407 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:18:03.407 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.408 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:03.666 Cannot find device "nvmf_init_br" 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:03.666 Cannot find device "nvmf_init_br2" 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:03.666 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:03.666 Cannot find device "nvmf_tgt_br" 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:03.667 Cannot find device "nvmf_tgt_br2" 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:03.667 Cannot find device "nvmf_init_br" 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:03.667 Cannot find device "nvmf_init_br2" 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:03.667 Cannot find device "nvmf_tgt_br" 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:03.667 Cannot find device "nvmf_tgt_br2" 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:03.667 Cannot find device "nvmf_br" 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:03.667 Cannot find device "nvmf_init_if" 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:03.667 Cannot find device "nvmf_init_if2" 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:03.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:03.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:03.667 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:03.925 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:03.925 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:18:03.925 00:18:03.925 --- 10.0.0.3 ping statistics --- 00:18:03.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.925 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:03.925 13:52:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:03.925 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:03.925 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 00:18:03.925 00:18:03.925 --- 10.0.0.4 ping statistics --- 00:18:03.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.925 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:18:03.925 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:03.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:03.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:03.925 00:18:03.925 --- 10.0.0.1 ping statistics --- 00:18:03.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.925 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:03.925 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:03.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:18:03.925 00:18:03.925 --- 10.0.0.2 ping statistics --- 00:18:03.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.925 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:03.925 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75395 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75395 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 75395 ']' 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.926 13:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.183 [2024-10-01 13:52:14.106130] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:18:04.183 [2024-10-01 13:52:14.106227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.183 [2024-10-01 13:52:14.248336] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:04.440 [2024-10-01 13:52:14.387261] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.440 [2024-10-01 13:52:14.387367] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.440 [2024-10-01 13:52:14.387382] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.440 [2024-10-01 13:52:14.387394] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.440 [2024-10-01 13:52:14.387402] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
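The trace above boils down to a small, repeatable launch pattern: start the SPDK target inside the dedicated test namespace, record its PID, and poll its JSON-RPC socket before issuing any configuration calls. A condensed sketch of that sequence (the polling loop is an illustrative stand-in for the harness's waitforlisten helper; /var/tmp/spdk.sock is the default RPC socket):

    # Start nvmf_tgt inside the test namespace created by the veth/bridge setup above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Block until the target answers on its RPC socket; only then is it safe
    # to drive it with rpc.py (stand-in for the waitforlisten helper).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done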
00:18:04.440 [2024-10-01 13:52:14.387564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.440 [2024-10-01 13:52:14.387723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.440 [2024-10-01 13:52:14.388586] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:04.440 [2024-10-01 13:52:14.388639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.440 [2024-10-01 13:52:14.447556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:05.373 13:52:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:05.373 13:52:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:18:05.373 13:52:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:05.373 [2024-10-01 13:52:15.495258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.373 13:52:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:05.373 13:52:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:05.373 13:52:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.630 13:52:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:05.888 Malloc1 00:18:05.888 13:52:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:06.146 13:52:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:06.404 13:52:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:06.720 [2024-10-01 13:52:16.757431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:06.720 13:52:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:06.978 13:52:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:07.235 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:07.235 fio-3.35 00:18:07.235 Starting 1 thread 00:18:09.758 00:18:09.758 test: (groupid=0, jobs=1): err= 0: pid=75484: Tue Oct 1 13:52:19 2024 00:18:09.758 read: IOPS=8292, BW=32.4MiB/s (34.0MB/s)(65.0MiB/2007msec) 00:18:09.758 slat (usec): min=2, max=352, avg= 2.45, stdev= 3.37 00:18:09.758 clat (usec): min=2608, max=14964, avg=8052.45, stdev=569.03 00:18:09.758 lat (usec): min=2622, max=14967, avg=8054.91, stdev=568.71 00:18:09.758 clat percentiles (usec): 00:18:09.758 | 1.00th=[ 6915], 5.00th=[ 7242], 10.00th=[ 7439], 20.00th=[ 7635], 00:18:09.758 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8160], 00:18:09.758 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8848], 00:18:09.758 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[13698], 99.95th=[14353], 00:18:09.758 | 99.99th=[15008] 00:18:09.758 bw ( KiB/s): min=32640, max=33832, per=99.91%, avg=33142.00, stdev=565.96, samples=4 00:18:09.758 iops : min= 8160, max= 8458, avg=8285.50, stdev=141.49, samples=4 00:18:09.758 write: IOPS=8290, BW=32.4MiB/s (34.0MB/s)(65.0MiB/2007msec); 0 zone resets 00:18:09.758 slat (usec): min=2, max=237, avg= 2.60, stdev= 2.07 00:18:09.758 clat (usec): min=2420, max=14268, avg=7326.09, stdev=500.26 00:18:09.758 lat (usec): min=2434, max=14271, avg=7328.70, stdev=500.07 00:18:09.758 clat percentiles (usec): 
00:18:09.758 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6783], 20.00th=[ 6980], 00:18:09.758 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7308], 60.00th=[ 7439], 00:18:09.758 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7898], 95.00th=[ 8029], 00:18:09.758 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[11600], 99.95th=[13566], 00:18:09.758 | 99.99th=[14222] 00:18:09.758 bw ( KiB/s): min=32832, max=33416, per=100.00%, avg=33170.00, stdev=290.24, samples=4 00:18:09.758 iops : min= 8208, max= 8354, avg=8292.50, stdev=72.56, samples=4 00:18:09.758 lat (msec) : 4=0.09%, 10=99.70%, 20=0.21% 00:18:09.758 cpu : usr=73.28%, sys=20.34%, ctx=9, majf=0, minf=6 00:18:09.758 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:09.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:09.758 issued rwts: total=16644,16639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:09.758 00:18:09.758 Run status group 0 (all jobs): 00:18:09.758 READ: bw=32.4MiB/s (34.0MB/s), 32.4MiB/s-32.4MiB/s (34.0MB/s-34.0MB/s), io=65.0MiB (68.2MB), run=2007-2007msec 00:18:09.758 WRITE: bw=32.4MiB/s (34.0MB/s), 32.4MiB/s-32.4MiB/s (34.0MB/s-34.0MB/s), io=65.0MiB (68.2MB), run=2007-2007msec 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:09.758 13:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:09.758 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:09.758 fio-3.35 00:18:09.758 Starting 1 thread 00:18:12.286 00:18:12.286 test: (groupid=0, jobs=1): err= 0: pid=75528: Tue Oct 1 13:52:22 2024 00:18:12.286 read: IOPS=7472, BW=117MiB/s (122MB/s)(234MiB/2007msec) 00:18:12.286 slat (usec): min=3, max=121, avg= 4.04, stdev= 1.95 00:18:12.286 clat (usec): min=2020, max=20006, avg=9567.91, stdev=2840.69 00:18:12.286 lat (usec): min=2024, max=20011, avg=9571.95, stdev=2840.84 00:18:12.286 clat percentiles (usec): 00:18:12.286 | 1.00th=[ 4359], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6980], 00:18:12.286 | 30.00th=[ 7832], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10290], 00:18:12.286 | 70.00th=[11076], 80.00th=[11863], 90.00th=[13435], 95.00th=[14746], 00:18:12.286 | 99.00th=[16909], 99.50th=[17695], 99.90th=[19006], 99.95th=[19530], 00:18:12.286 | 99.99th=[20055] 00:18:12.286 bw ( KiB/s): min=53088, max=72160, per=50.39%, avg=60240.00, stdev=8744.24, samples=4 00:18:12.286 iops : min= 3318, max= 4510, avg=3765.00, stdev=546.52, samples=4 00:18:12.286 write: IOPS=4311, BW=67.4MiB/s (70.6MB/s)(123MiB/1829msec); 0 zone resets 00:18:12.286 slat (usec): min=34, max=165, avg=39.67, stdev= 6.40 00:18:12.286 clat (usec): min=3568, max=25203, avg=13397.04, stdev=2747.11 00:18:12.286 lat (usec): min=3605, max=25245, avg=13436.71, stdev=2748.65 00:18:12.286 clat percentiles (usec): 00:18:12.286 | 1.00th=[ 8029], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[11076], 00:18:12.286 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13042], 60.00th=[13829], 00:18:12.286 | 70.00th=[14877], 80.00th=[15795], 90.00th=[16909], 95.00th=[17957], 00:18:12.286 | 99.00th=[20579], 99.50th=[21365], 99.90th=[24511], 99.95th=[24773], 00:18:12.286 | 99.99th=[25297] 00:18:12.286 bw ( KiB/s): min=54560, max=75424, per=91.07%, avg=62816.00, stdev=9426.52, samples=4 00:18:12.286 iops : min= 3410, max= 4714, avg=3926.00, stdev=589.16, samples=4 00:18:12.286 lat (msec) : 4=0.40%, 10=40.22%, 20=58.83%, 50=0.54% 00:18:12.286 cpu : usr=78.22%, sys=16.40%, ctx=7, majf=0, minf=7 00:18:12.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:18:12.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.286 issued rwts: total=14997,7885,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.286 00:18:12.286 Run status group 0 (all jobs): 00:18:12.286 READ: bw=117MiB/s (122MB/s), 
117MiB/s-117MiB/s (122MB/s-122MB/s), io=234MiB (246MB), run=2007-2007msec 00:18:12.286 WRITE: bw=67.4MiB/s (70.6MB/s), 67.4MiB/s-67.4MiB/s (70.6MB/s-70.6MB/s), io=123MiB (129MB), run=1829-1829msec 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:12.286 rmmod nvme_tcp 00:18:12.286 rmmod nvme_fabrics 00:18:12.286 rmmod nvme_keyring 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 75395 ']' 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 75395 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 75395 ']' 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 75395 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:12.286 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75395 00:18:12.564 killing process with pid 75395 00:18:12.564 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:12.564 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:12.564 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75395' 00:18:12.564 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 75395 00:18:12.564 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 75395 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@787 -- # iptables-save 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:12.822 13:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:18:13.080 ************************************ 00:18:13.080 END TEST nvmf_fio_host 00:18:13.080 ************************************ 00:18:13.080 00:18:13.080 real 0m9.728s 00:18:13.080 user 0m38.359s 00:18:13.080 sys 0m2.468s 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.080 ************************************ 00:18:13.080 START TEST nvmf_failover 00:18:13.080 
************************************ 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:13.080 * Looking for test storage... 00:18:13.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:18:13.080 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:13.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.339 --rc genhtml_branch_coverage=1 00:18:13.339 --rc genhtml_function_coverage=1 00:18:13.339 --rc genhtml_legend=1 00:18:13.339 --rc geninfo_all_blocks=1 00:18:13.339 --rc geninfo_unexecuted_blocks=1 00:18:13.339 00:18:13.339 ' 00:18:13.339 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:13.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.339 --rc genhtml_branch_coverage=1 00:18:13.339 --rc genhtml_function_coverage=1 00:18:13.339 --rc genhtml_legend=1 00:18:13.339 --rc geninfo_all_blocks=1 00:18:13.339 --rc geninfo_unexecuted_blocks=1 00:18:13.339 00:18:13.339 ' 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:13.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.340 --rc genhtml_branch_coverage=1 00:18:13.340 --rc genhtml_function_coverage=1 00:18:13.340 --rc genhtml_legend=1 00:18:13.340 --rc geninfo_all_blocks=1 00:18:13.340 --rc geninfo_unexecuted_blocks=1 00:18:13.340 00:18:13.340 ' 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:13.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.340 --rc genhtml_branch_coverage=1 00:18:13.340 --rc genhtml_function_coverage=1 00:18:13.340 --rc genhtml_legend=1 00:18:13.340 --rc geninfo_all_blocks=1 00:18:13.340 --rc geninfo_unexecuted_blocks=1 00:18:13.340 00:18:13.340 ' 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f52f68-80e5-4327-8a21-70d63145da24 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=88f52f68-80e5-4327-8a21-70d63145da24 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.340 
13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.340 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 
00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:13.340 Cannot find device "nvmf_init_br" 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:13.340 Cannot find device "nvmf_init_br2" 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:18:13.340 Cannot find device "nvmf_tgt_br" 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.340 Cannot find device "nvmf_tgt_br2" 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:13.340 Cannot find device "nvmf_init_br" 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:13.340 Cannot find device "nvmf_init_br2" 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:13.340 Cannot find device "nvmf_tgt_br" 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:18:13.340 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:13.341 Cannot find device "nvmf_tgt_br2" 00:18:13.341 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:18:13.341 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:13.341 Cannot find device "nvmf_br" 00:18:13.341 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:18:13.341 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:13.599 Cannot find device "nvmf_init_if" 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:13.599 Cannot find device "nvmf_init_if2" 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:13.599 
13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:13.599 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:13.600 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:13.600 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:18:13.600 00:18:13.600 --- 10.0.0.3 ping statistics --- 00:18:13.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.600 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:13.600 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:13.600 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.098 ms 00:18:13.600 00:18:13.600 --- 10.0.0.4 ping statistics --- 00:18:13.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.600 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:13.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:13.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:18:13.600 00:18:13.600 --- 10.0.0.1 ping statistics --- 00:18:13.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.600 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:13.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:18:13.600 00:18:13.600 --- 10.0.0.2 ping statistics --- 00:18:13.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.600 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:13.600 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:13.859 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:13.859 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:13.859 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:13.859 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:13.859 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=75804 00:18:13.859 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:13.859 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 75804 00:18:13.859 13:52:23 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75804 ']' 00:18:13.859 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.859 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.859 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.859 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.859 13:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:13.859 [2024-10-01 13:52:23.838366] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:18:13.859 [2024-10-01 13:52:23.838481] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.859 [2024-10-01 13:52:23.978195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:14.117 [2024-10-01 13:52:24.106957] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.117 [2024-10-01 13:52:24.107030] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.117 [2024-10-01 13:52:24.107044] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.117 [2024-10-01 13:52:24.107055] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.117 [2024-10-01 13:52:24.107064] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
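Note the ipts wrapper traced above: every iptables rule it adds carries an SPDK_NVMF comment, which is what lets the iptr cleanup at the end of the previous test strip exactly those rules and nothing else. Condensed, the pattern is (commands copied from the trace; only the grouping is editorial):

    # Open the NVMe/TCP port on the initiator-side veth, tagged for later cleanup.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    # Teardown: drop every SPDK_NVMF-tagged rule in one pass.
    iptables-save | grep -v SPDK_NVMF | iptables-restore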
00:18:14.117 [2024-10-01 13:52:24.107258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.117 [2024-10-01 13:52:24.108133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:14.117 [2024-10-01 13:52:24.108143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.117 [2024-10-01 13:52:24.165372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:14.682 13:52:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:14.682 13:52:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:18:14.682 13:52:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:14.682 13:52:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:14.682 13:52:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:14.940 13:52:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.940 13:52:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:15.208 [2024-10-01 13:52:25.214046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.208 13:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:15.483 Malloc0 00:18:15.483 13:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:15.740 13:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:15.998 13:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:16.256 [2024-10-01 13:52:26.322022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:16.256 13:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:16.515 [2024-10-01 13:52:26.574533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:16.515 13:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:16.773 [2024-10-01 13:52:26.903011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:16.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
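Stripped of the xtrace noise, the target-side provisioning that just ran reduces to a short RPC sequence: one TCP transport, one Malloc bdev, one subsystem with that bdev as a namespace, and three listeners that give the initiator alternate paths to fail over between. A condensed sketch, assuming the target's default /var/tmp/spdk.sock RPC socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Three portals on 10.0.0.3 (ports 4420/4421/4422); bdevperf attaches to two
    # of them, and the test then removes/re-adds listeners to force path failover
    # while I/O is running.
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
    done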
00:18:16.773 13:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75867 00:18:16.773 13:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:16.773 13:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:16.773 13:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75867 /var/tmp/bdevperf.sock 00:18:16.773 13:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75867 ']' 00:18:16.773 13:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.773 13:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:16.773 13:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.773 13:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:16.773 13:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:18.149 13:52:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.150 13:52:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:18:18.150 13:52:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:18.408 NVMe0n1 00:18:18.408 13:52:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:18.666 NVMe0n1 00:18:18.666 13:52:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75895 00:18:18.666 13:52:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:18.666 13:52:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:19.603 13:52:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:20.169 13:52:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:23.491 13:52:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:23.491 NVMe0n1 00:18:23.491 13:52:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:23.750 13:52:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:27.031 13:52:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:27.031 [2024-10-01 13:52:37.072023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:27.031 13:52:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:27.963 13:52:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:28.529 13:52:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75895 00:18:33.791 { 00:18:33.791 "results": [ 00:18:33.791 { 00:18:33.791 "job": "NVMe0n1", 00:18:33.791 "core_mask": "0x1", 00:18:33.791 "workload": "verify", 00:18:33.791 "status": "finished", 00:18:33.791 "verify_range": { 00:18:33.791 "start": 0, 00:18:33.791 "length": 16384 00:18:33.791 }, 00:18:33.791 "queue_depth": 128, 00:18:33.791 "io_size": 4096, 00:18:33.791 "runtime": 15.009747, 00:18:33.791 "iops": 8240.911722229563, 00:18:33.791 "mibps": 32.19106141495923, 00:18:33.791 "io_failed": 0, 00:18:33.791 "io_timeout": 0, 00:18:33.791 "avg_latency_us": 15497.875964528303, 00:18:33.791 "min_latency_us": 1608.610909090909, 00:18:33.791 "max_latency_us": 20614.05090909091 00:18:33.791 } 00:18:33.791 ], 00:18:33.791 "core_count": 1 00:18:33.791 } 00:18:33.791 13:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75867 00:18:33.791 13:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75867 ']' 00:18:33.791 13:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75867 00:18:33.791 13:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:18:33.791 13:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:33.791 13:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75867 00:18:33.791 killing process with pid 75867 00:18:33.791 13:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:33.791 13:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:33.791 13:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75867' 00:18:33.791 13:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75867 00:18:33.791 13:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75867 00:18:34.393 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:34.393 [2024-10-01 13:52:26.984946] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:18:34.393 [2024-10-01 13:52:26.985074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75867 ] 00:18:34.393 [2024-10-01 13:52:27.128016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.393 [2024-10-01 13:52:27.283318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.393 [2024-10-01 13:52:27.361257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:34.393 Running I/O for 15 seconds... 00:18:34.393 7184.00 IOPS, 28.06 MiB/s [2024-10-01 13:52:30.070093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.070201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.070255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.070295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.070327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.070359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.070391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.070424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.070457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 
13:52:30.070474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.070490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.070522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.070619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.070654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.070686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.070720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.070752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.070785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.070818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.070864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.070895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.070944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.070976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.070993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.071008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.071040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.071083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.071114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.071146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.071178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.071210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:71 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71024 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.071709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.071741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.071773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.071805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.071837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.071868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 
[2024-10-01 13:52:30.071907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.393 [2024-10-01 13:52:30.071966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.071983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.071998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.072015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.072030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.072046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.072062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.393 [2024-10-01 13:52:30.072080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.393 [2024-10-01 13:52:30.072095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.072126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.072158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.072190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.072221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.072253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.072284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.072315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.072359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.072391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.072423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.072984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.072999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.073268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 
[2024-10-01 13:52:30.073284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.073299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.073331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.073363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.073394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.073426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.073458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.394 [2024-10-01 13:52:30.073490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:117 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.073966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.394 [2024-10-01 13:52:30.073984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.394 [2024-10-01 13:52:30.074000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.395 [2024-10-01 13:52:30.074040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.395 [2024-10-01 13:52:30.074073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.395 [2024-10-01 13:52:30.074112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.395 [2024-10-01 13:52:30.074144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.395 [2024-10-01 13:52:30.074176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.395 [2024-10-01 13:52:30.074207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.395 [2024-10-01 13:52:30.074238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.395 [2024-10-01 13:52:30.074270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71288 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.395 [2024-10-01 13:52:30.074302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.395 [2024-10-01 13:52:30.074334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.395 [2024-10-01 13:52:30.074366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.395 [2024-10-01 13:52:30.074397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.395 [2024-10-01 13:52:30.074429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.395 [2024-10-01 13:52:30.074469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.395 [2024-10-01 13:52:30.074513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.395 [2024-10-01 13:52:30.074558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cc770 is same with the state(6) to be set 00:18:34.395 [2024-10-01 13:52:30.074597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.395 [2024-10-01 13:52:30.074610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.395 [2024-10-01 13:52:30.074622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70840 len:8 PRP1 0x0 PRP2 0x0 00:18:34.395 [2024-10-01 13:52:30.074643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074729] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9cc770 was disconnected and 
freed. reset controller. 00:18:34.395 [2024-10-01 13:52:30.074879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.395 [2024-10-01 13:52:30.074908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.395 [2024-10-01 13:52:30.074964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.074979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.395 [2024-10-01 13:52:30.074994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.075009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.395 [2024-10-01 13:52:30.075023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.395 [2024-10-01 13:52:30.075038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.395 [2024-10-01 13:52:30.076103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.395 [2024-10-01 13:52:30.076148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.395 [2024-10-01 13:52:30.076540] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.395 [2024-10-01 13:52:30.076574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.395 [2024-10-01 13:52:30.076593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.395 [2024-10-01 13:52:30.076717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.395 [2024-10-01 13:52:30.076801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.395 [2024-10-01 13:52:30.076826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.395 [2024-10-01 13:52:30.076845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.395 [2024-10-01 13:52:30.076880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.395 [2024-10-01 13:52:30.087090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.395 [2024-10-01 13:52:30.087291] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.395 [2024-10-01 13:52:30.087337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.395 [2024-10-01 13:52:30.087359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.395 [2024-10-01 13:52:30.087396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.395 [2024-10-01 13:52:30.087430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.395 [2024-10-01 13:52:30.087448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.395 [2024-10-01 13:52:30.087466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.395 [2024-10-01 13:52:30.087500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.395 [2024-10-01 13:52:30.097210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.395 [2024-10-01 13:52:30.097361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.395 [2024-10-01 13:52:30.097396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.395 [2024-10-01 13:52:30.097415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.395 [2024-10-01 13:52:30.097451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.395 [2024-10-01 13:52:30.097484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.395 [2024-10-01 13:52:30.097503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.395 [2024-10-01 13:52:30.097521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.395 [2024-10-01 13:52:30.097561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.395 [2024-10-01 13:52:30.107887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.395 [2024-10-01 13:52:30.108065] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.395 [2024-10-01 13:52:30.108100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.395 [2024-10-01 13:52:30.108119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.395 [2024-10-01 13:52:30.108155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.395 [2024-10-01 13:52:30.108189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.395 [2024-10-01 13:52:30.108209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.395 [2024-10-01 13:52:30.108227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.395 [2024-10-01 13:52:30.108290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.395 [2024-10-01 13:52:30.118131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.395 [2024-10-01 13:52:30.118292] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.395 [2024-10-01 13:52:30.118327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.395 [2024-10-01 13:52:30.118346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.395 [2024-10-01 13:52:30.119296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.395 [2024-10-01 13:52:30.119944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.395 [2024-10-01 13:52:30.119982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.395 [2024-10-01 13:52:30.120003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.395 [2024-10-01 13:52:30.120114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.395 [2024-10-01 13:52:30.128281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.395 [2024-10-01 13:52:30.128440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.395 [2024-10-01 13:52:30.128474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.395 [2024-10-01 13:52:30.128494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.395 [2024-10-01 13:52:30.128530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.395 [2024-10-01 13:52:30.128563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.395 [2024-10-01 13:52:30.128581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.395 [2024-10-01 13:52:30.128598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.395 [2024-10-01 13:52:30.128630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.395 [2024-10-01 13:52:30.138392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.395 [2024-10-01 13:52:30.138569] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.395 [2024-10-01 13:52:30.138605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.395 [2024-10-01 13:52:30.138634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.395 [2024-10-01 13:52:30.138672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.395 [2024-10-01 13:52:30.138705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.395 [2024-10-01 13:52:30.138723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.395 [2024-10-01 13:52:30.138740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.395 [2024-10-01 13:52:30.138773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.395 [2024-10-01 13:52:30.148507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.395 [2024-10-01 13:52:30.148666] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.395 [2024-10-01 13:52:30.148701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.395 [2024-10-01 13:52:30.148772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.395 [2024-10-01 13:52:30.148810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.395 [2024-10-01 13:52:30.148844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.395 [2024-10-01 13:52:30.148862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.395 [2024-10-01 13:52:30.148878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.395 [2024-10-01 13:52:30.148927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.395 [2024-10-01 13:52:30.158619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.395 [2024-10-01 13:52:30.158769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.395 [2024-10-01 13:52:30.158812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.395 [2024-10-01 13:52:30.158831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.395 [2024-10-01 13:52:30.158866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.395 [2024-10-01 13:52:30.158899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.396 [2024-10-01 13:52:30.158933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.396 [2024-10-01 13:52:30.158951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.396 [2024-10-01 13:52:30.158985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.396 [2024-10-01 13:52:30.169390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.396 [2024-10-01 13:52:30.169756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.396 [2024-10-01 13:52:30.169802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.396 [2024-10-01 13:52:30.169823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.396 [2024-10-01 13:52:30.169900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.396 [2024-10-01 13:52:30.169958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.396 [2024-10-01 13:52:30.169978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.396 [2024-10-01 13:52:30.169994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.396 [2024-10-01 13:52:30.170026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.396 [2024-10-01 13:52:30.179496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.396 [2024-10-01 13:52:30.179634] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.396 [2024-10-01 13:52:30.179668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.396 [2024-10-01 13:52:30.179688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.396 [2024-10-01 13:52:30.179724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.396 [2024-10-01 13:52:30.179757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.396 [2024-10-01 13:52:30.179810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.396 [2024-10-01 13:52:30.179828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.396 [2024-10-01 13:52:30.180749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.396 [2024-10-01 13:52:30.189594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.396 [2024-10-01 13:52:30.189728] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.396 [2024-10-01 13:52:30.189769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.396 [2024-10-01 13:52:30.189790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.396 [2024-10-01 13:52:30.189825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.396 [2024-10-01 13:52:30.189857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.396 [2024-10-01 13:52:30.189875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.396 [2024-10-01 13:52:30.189892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.396 [2024-10-01 13:52:30.189940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.396 [2024-10-01 13:52:30.200973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.396 [2024-10-01 13:52:30.201123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.396 [2024-10-01 13:52:30.201157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.396 [2024-10-01 13:52:30.201184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.396 [2024-10-01 13:52:30.201220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.396 [2024-10-01 13:52:30.201254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.396 [2024-10-01 13:52:30.201272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.396 [2024-10-01 13:52:30.201288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.396 [2024-10-01 13:52:30.201319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.396 [2024-10-01 13:52:30.211085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.396 [2024-10-01 13:52:30.211229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.396 [2024-10-01 13:52:30.211269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.396 [2024-10-01 13:52:30.211289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.396 [2024-10-01 13:52:30.211325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.396 [2024-10-01 13:52:30.211358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.396 [2024-10-01 13:52:30.211376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.396 [2024-10-01 13:52:30.211393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.396 [2024-10-01 13:52:30.211425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.396 [2024-10-01 13:52:30.221182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.396 [2024-10-01 13:52:30.221368] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.396 [2024-10-01 13:52:30.221402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.396 [2024-10-01 13:52:30.221421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.396 [2024-10-01 13:52:30.221456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.396 [2024-10-01 13:52:30.221505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.396 [2024-10-01 13:52:30.221526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.396 [2024-10-01 13:52:30.221542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.396 [2024-10-01 13:52:30.221574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.396 [2024-10-01 13:52:30.231326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.396 [2024-10-01 13:52:30.231464] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.396 [2024-10-01 13:52:30.231505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.396 [2024-10-01 13:52:30.231525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.396 [2024-10-01 13:52:30.231561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.396 [2024-10-01 13:52:30.231594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.396 [2024-10-01 13:52:30.231613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.396 [2024-10-01 13:52:30.231629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.396 [2024-10-01 13:52:30.231660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.396 [2024-10-01 13:52:30.241662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.396 [2024-10-01 13:52:30.242037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.396 [2024-10-01 13:52:30.242082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.396 [2024-10-01 13:52:30.242104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.396 [2024-10-01 13:52:30.242179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.396 [2024-10-01 13:52:30.242219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.396 [2024-10-01 13:52:30.242238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.396 [2024-10-01 13:52:30.242255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.396 [2024-10-01 13:52:30.242294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.396 [2024-10-01 13:52:30.251762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.396 [2024-10-01 13:52:30.251898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.396 [2024-10-01 13:52:30.251946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.396 [2024-10-01 13:52:30.251967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.396 [2024-10-01 13:52:30.252035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.396 [2024-10-01 13:52:30.252069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.396 [2024-10-01 13:52:30.252087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.396 [2024-10-01 13:52:30.252104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.396 [2024-10-01 13:52:30.252135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.396 [2024-10-01 13:52:30.261861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.396 [2024-10-01 13:52:30.262008] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.396 [2024-10-01 13:52:30.262056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.396 [2024-10-01 13:52:30.262077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.396 [2024-10-01 13:52:30.262113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.396 [2024-10-01 13:52:30.262146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.396 [2024-10-01 13:52:30.262164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.396 [2024-10-01 13:52:30.262180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.396 [2024-10-01 13:52:30.262211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.396 [2024-10-01 13:52:30.271977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.396 [2024-10-01 13:52:30.272126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.396 [2024-10-01 13:52:30.272164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.396 [2024-10-01 13:52:30.272184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.396 [2024-10-01 13:52:30.273408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.396 [2024-10-01 13:52:30.273629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.396 [2024-10-01 13:52:30.273665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.396 [2024-10-01 13:52:30.273684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.396 [2024-10-01 13:52:30.273720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.396 [2024-10-01 13:52:30.283371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.396 [2024-10-01 13:52:30.283541] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.396 [2024-10-01 13:52:30.283576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.396 [2024-10-01 13:52:30.283596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.396 [2024-10-01 13:52:30.283632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.396 [2024-10-01 13:52:30.283666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.396 [2024-10-01 13:52:30.283684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.396 [2024-10-01 13:52:30.283742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.396 [2024-10-01 13:52:30.283777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.396 [2024-10-01 13:52:30.293878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.396 [2024-10-01 13:52:30.294057] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.396 [2024-10-01 13:52:30.294101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.396 [2024-10-01 13:52:30.294122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.396 [2024-10-01 13:52:30.294159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.294192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.294210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.294227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.294259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.397 [2024-10-01 13:52:30.304541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.305432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.305479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.397 [2024-10-01 13:52:30.305501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.397 [2024-10-01 13:52:30.305693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.305749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.305771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.305788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.305822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.397 [2024-10-01 13:52:30.316075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.316228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.316264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.397 [2024-10-01 13:52:30.316285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.397 [2024-10-01 13:52:30.316320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.317241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.317279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.317300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.317509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.397 [2024-10-01 13:52:30.327322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.327466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.327539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.397 [2024-10-01 13:52:30.327561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.397 [2024-10-01 13:52:30.327596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.327630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.327647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.327664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.327698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.397 [2024-10-01 13:52:30.337947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.338108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.338153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.397 [2024-10-01 13:52:30.338174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.397 [2024-10-01 13:52:30.338210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.338243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.338262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.338279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.338311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.397 [2024-10-01 13:52:30.348628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.349500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.349547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.397 [2024-10-01 13:52:30.349569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.397 [2024-10-01 13:52:30.349773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.349849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.349873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.349890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.349941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.397 [2024-10-01 13:52:30.358737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.358890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.358938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.397 [2024-10-01 13:52:30.358960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.397 [2024-10-01 13:52:30.358997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.360274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.360313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.360333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.360576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.397 [2024-10-01 13:52:30.368851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.369006] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.369041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.397 [2024-10-01 13:52:30.369060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.397 [2024-10-01 13:52:30.369863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.370084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.370119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.370139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.371148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.397 [2024-10-01 13:52:30.379227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.379375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.379419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.397 [2024-10-01 13:52:30.379441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.397 [2024-10-01 13:52:30.379477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.379511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.379529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.379546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.379579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.397 [2024-10-01 13:52:30.389777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.389984] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.390027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.397 [2024-10-01 13:52:30.390049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.397 [2024-10-01 13:52:30.390085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.390118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.390136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.390153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.391178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.397 [2024-10-01 13:52:30.401013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.401180] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.401217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.397 [2024-10-01 13:52:30.401236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.397 [2024-10-01 13:52:30.401272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.401305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.401323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.401340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.401371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.397 [2024-10-01 13:52:30.411514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.411669] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.411718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.397 [2024-10-01 13:52:30.411740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.397 [2024-10-01 13:52:30.411776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.411809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.411827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.411843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.411875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.397 [2024-10-01 13:52:30.422995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.423298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.423343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.397 [2024-10-01 13:52:30.423365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.397 [2024-10-01 13:52:30.423412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.423447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.423467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.423484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.423518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.397 [2024-10-01 13:52:30.433755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.433951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.433989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.397 [2024-10-01 13:52:30.434044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.397 [2024-10-01 13:52:30.434086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.435051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.435089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.435114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.435352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.397 [2024-10-01 13:52:30.445083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.445248] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.445284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.397 [2024-10-01 13:52:30.445304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.397 [2024-10-01 13:52:30.445340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.397 [2024-10-01 13:52:30.445373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.397 [2024-10-01 13:52:30.445393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.397 [2024-10-01 13:52:30.445409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.397 [2024-10-01 13:52:30.445442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.397 [2024-10-01 13:52:30.455708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.397 [2024-10-01 13:52:30.455886] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.397 [2024-10-01 13:52:30.455934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.455956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.455992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.456025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.456052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.456067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.456100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.398 [2024-10-01 13:52:30.466712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.398 [2024-10-01 13:52:30.467591] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.398 [2024-10-01 13:52:30.467639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.467662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.467878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.467945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.468002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.468020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.468055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.398 [2024-10-01 13:52:30.476821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.398 [2024-10-01 13:52:30.476998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.398 [2024-10-01 13:52:30.477034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.477053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.477089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.477121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.477139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.477155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.477187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.398 [2024-10-01 13:52:30.486952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.398 [2024-10-01 13:52:30.487107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.398 [2024-10-01 13:52:30.487142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.487161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.487196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.487229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.487249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.487265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.487304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.398 [2024-10-01 13:52:30.497143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.398 [2024-10-01 13:52:30.498055] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.398 [2024-10-01 13:52:30.498104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.498127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.498328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.498385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.498407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.498423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.498456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.398 [2024-10-01 13:52:30.508446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.398 [2024-10-01 13:52:30.508806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.398 [2024-10-01 13:52:30.508857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.508879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.508938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.508977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.508995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.509012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.509907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.398 [2024-10-01 13:52:30.520068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.398 [2024-10-01 13:52:30.520283] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.398 [2024-10-01 13:52:30.520319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.520338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.520375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.520408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.520425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.520442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.520475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.398 [2024-10-01 13:52:30.530571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.398 [2024-10-01 13:52:30.530745] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.398 [2024-10-01 13:52:30.530782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.530802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.530838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.530872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.530890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.530906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.530960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.398 [2024-10-01 13:52:30.541183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.398 [2024-10-01 13:52:30.542063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.398 [2024-10-01 13:52:30.542114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.542137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.542349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.542404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.542425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.542441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.542475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.398 [2024-10-01 13:52:30.552731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.398 [2024-10-01 13:52:30.552951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.398 [2024-10-01 13:52:30.552991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.553012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.553959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.554208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.554246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.554267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.554350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.398 [2024-10-01 13:52:30.564224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.398 [2024-10-01 13:52:30.564404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.398 [2024-10-01 13:52:30.564440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.564460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.564498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.564531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.564549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.564567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.564600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.398 [2024-10-01 13:52:30.574854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.398 [2024-10-01 13:52:30.575053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.398 [2024-10-01 13:52:30.575089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.575109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.575146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.575178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.575197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.575244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.575279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.398 [2024-10-01 13:52:30.585634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.398 [2024-10-01 13:52:30.586532] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.398 [2024-10-01 13:52:30.586599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.586621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.586848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.586924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.586947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.586964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.586998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.398 [2024-10-01 13:52:30.597085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.398 [2024-10-01 13:52:30.597240] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.398 [2024-10-01 13:52:30.597275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.597294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.597330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.597363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.597381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.597397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.598308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.398 [2024-10-01 13:52:30.608265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.398 [2024-10-01 13:52:30.608419] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.398 [2024-10-01 13:52:30.608455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.398 [2024-10-01 13:52:30.608475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.398 [2024-10-01 13:52:30.608512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.398 [2024-10-01 13:52:30.608544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.398 [2024-10-01 13:52:30.608562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.398 [2024-10-01 13:52:30.608578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.398 [2024-10-01 13:52:30.608611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.398 [2024-10-01 13:52:30.618727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.399 [2024-10-01 13:52:30.618877] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.399 [2024-10-01 13:52:30.618967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.399 [2024-10-01 13:52:30.619011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.399 [2024-10-01 13:52:30.619049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.399 [2024-10-01 13:52:30.619082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.399 [2024-10-01 13:52:30.619100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.399 [2024-10-01 13:52:30.619116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.399 [2024-10-01 13:52:30.619147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.399 [2024-10-01 13:52:30.629490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.399 [2024-10-01 13:52:30.630358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.399 [2024-10-01 13:52:30.630424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.399 [2024-10-01 13:52:30.630446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.399 [2024-10-01 13:52:30.630644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.399 [2024-10-01 13:52:30.630694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.399 [2024-10-01 13:52:30.630714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.399 [2024-10-01 13:52:30.630730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.399 [2024-10-01 13:52:30.630763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.399 [2024-10-01 13:52:30.639597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.399 [2024-10-01 13:52:30.639756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.399 [2024-10-01 13:52:30.639798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.399 [2024-10-01 13:52:30.639827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.399 [2024-10-01 13:52:30.641097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.399 [2024-10-01 13:52:30.641372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.399 [2024-10-01 13:52:30.641423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.399 [2024-10-01 13:52:30.641444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.399 [2024-10-01 13:52:30.642405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.399 [2024-10-01 13:52:30.649730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.399 [2024-10-01 13:52:30.650006] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.399 [2024-10-01 13:52:30.650048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.399 [2024-10-01 13:52:30.650070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.399 [2024-10-01 13:52:30.650967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.399 [2024-10-01 13:52:30.651227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.399 [2024-10-01 13:52:30.651265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.399 [2024-10-01 13:52:30.651287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.399 [2024-10-01 13:52:30.652345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.399 [2024-10-01 13:52:30.660740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.399 [2024-10-01 13:52:30.661000] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.399 [2024-10-01 13:52:30.661041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.399 [2024-10-01 13:52:30.661062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.399 [2024-10-01 13:52:30.661104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.399 [2024-10-01 13:52:30.661138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.399 [2024-10-01 13:52:30.661157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.399 [2024-10-01 13:52:30.661175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.399 [2024-10-01 13:52:30.661208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.399 [2024-10-01 13:52:30.671723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.399 [2024-10-01 13:52:30.671985] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.399 [2024-10-01 13:52:30.672024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.399 [2024-10-01 13:52:30.672045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.399 [2024-10-01 13:52:30.673009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.399 [2024-10-01 13:52:30.673259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.399 [2024-10-01 13:52:30.673297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.399 [2024-10-01 13:52:30.673319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.399 [2024-10-01 13:52:30.673402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.399 [2024-10-01 13:52:30.683190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.399 [2024-10-01 13:52:30.683452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.399 [2024-10-01 13:52:30.683490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.399 [2024-10-01 13:52:30.683511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.399 [2024-10-01 13:52:30.683551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.399 [2024-10-01 13:52:30.683586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.399 [2024-10-01 13:52:30.683604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.399 [2024-10-01 13:52:30.683622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.399 [2024-10-01 13:52:30.683694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.399 [2024-10-01 13:52:30.694028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.399 [2024-10-01 13:52:30.694276] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.399 [2024-10-01 13:52:30.694316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.399 [2024-10-01 13:52:30.694336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.399 [2024-10-01 13:52:30.694376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.399 [2024-10-01 13:52:30.694410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.399 [2024-10-01 13:52:30.694429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.399 [2024-10-01 13:52:30.694447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.399 [2024-10-01 13:52:30.694491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.399 [2024-10-01 13:52:30.705851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.399 [2024-10-01 13:52:30.706129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.399 [2024-10-01 13:52:30.706169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.399 [2024-10-01 13:52:30.706190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.399 [2024-10-01 13:52:30.706231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.399 [2024-10-01 13:52:30.706265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.399 [2024-10-01 13:52:30.706284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.399 [2024-10-01 13:52:30.706315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.399 [2024-10-01 13:52:30.706350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.399 [2024-10-01 13:52:30.716555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.399 [2024-10-01 13:52:30.716804] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.399 [2024-10-01 13:52:30.716842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.399 [2024-10-01 13:52:30.716863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.399 [2024-10-01 13:52:30.717840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.399 [2024-10-01 13:52:30.718127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.399 [2024-10-01 13:52:30.718166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.399 [2024-10-01 13:52:30.718188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.399 [2024-10-01 13:52:30.718280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.399 [2024-10-01 13:52:30.727956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.399 [2024-10-01 13:52:30.728208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.399 [2024-10-01 13:52:30.728247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.399 [2024-10-01 13:52:30.728321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.399 [2024-10-01 13:52:30.728363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.399 [2024-10-01 13:52:30.728397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.399 [2024-10-01 13:52:30.728417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.399 [2024-10-01 13:52:30.728434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.399 [2024-10-01 13:52:30.728468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.399 [2024-10-01 13:52:30.738460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.399 [2024-10-01 13:52:30.738717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.399 [2024-10-01 13:52:30.738755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.399 [2024-10-01 13:52:30.738776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.399 [2024-10-01 13:52:30.738816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.399 [2024-10-01 13:52:30.738849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.399 [2024-10-01 13:52:30.738867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.399 [2024-10-01 13:52:30.738884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.399 [2024-10-01 13:52:30.738936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.399 [2024-10-01 13:52:30.749900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.399 [2024-10-01 13:52:30.750288] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.399 [2024-10-01 13:52:30.750338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.399 [2024-10-01 13:52:30.750362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.399 [2024-10-01 13:52:30.750421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.399 [2024-10-01 13:52:30.750458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.399 [2024-10-01 13:52:30.750477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.399 [2024-10-01 13:52:30.750495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.399 [2024-10-01 13:52:30.750529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.399 [2024-10-01 13:52:30.760771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.399 [2024-10-01 13:52:30.761053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.399 [2024-10-01 13:52:30.761091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.399 [2024-10-01 13:52:30.761113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.762114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.762376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.762444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.762466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.762593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.400 [2024-10-01 13:52:30.772563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.772814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.772853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.400 [2024-10-01 13:52:30.772874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.772930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.772988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.773011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.773029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.773064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.400 [2024-10-01 13:52:30.783385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.783626] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.783664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.400 [2024-10-01 13:52:30.783685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.783724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.783760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.783779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.783797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.783829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.400 [2024-10-01 13:52:30.794976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.795324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.795373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.400 [2024-10-01 13:52:30.795396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.795461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.795501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.795521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.795539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.795573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.400 [2024-10-01 13:52:30.805816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.806065] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.806104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.400 [2024-10-01 13:52:30.806125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.807122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.807362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.807404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.807425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.807528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.400 [2024-10-01 13:52:30.817344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.817603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.817641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.400 [2024-10-01 13:52:30.817663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.817703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.817737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.817755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.817773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.817807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.400 [2024-10-01 13:52:30.828079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.828324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.828362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.400 [2024-10-01 13:52:30.828383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.828422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.828455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.828473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.828490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.828523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.400 [2024-10-01 13:52:30.838404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.838592] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.838630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.400 [2024-10-01 13:52:30.838650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.838713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.839965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.840004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.840026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.840942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.400 [2024-10-01 13:52:30.848550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.848704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.848740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.400 [2024-10-01 13:52:30.848773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.848813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.848846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.848863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.848880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.848926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.400 [2024-10-01 13:52:30.859984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.860195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.860233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.400 [2024-10-01 13:52:30.860253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.860290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.860324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.860342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.860360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.860392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.400 7601.50 IOPS, 29.69 MiB/s [2024-10-01 13:52:30.871766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.872123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.872163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.400 [2024-10-01 13:52:30.872215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.872261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.872297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.872316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.872363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.872398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.400 [2024-10-01 13:52:30.883073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.883275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.883312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.400 [2024-10-01 13:52:30.883332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.883379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.884323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.884364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.884385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.884627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.400 [2024-10-01 13:52:30.894366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.894550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.894588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.400 [2024-10-01 13:52:30.894609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.894646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.894680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.894697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.894714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.894746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.400 [2024-10-01 13:52:30.904797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.904970] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.905007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.400 [2024-10-01 13:52:30.905027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.905064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.905097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.905115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.905131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.905164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.400 [2024-10-01 13:52:30.915493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.916412] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.916463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.400 [2024-10-01 13:52:30.916485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.400 [2024-10-01 13:52:30.916675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.400 [2024-10-01 13:52:30.916725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.400 [2024-10-01 13:52:30.916745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.400 [2024-10-01 13:52:30.916761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.400 [2024-10-01 13:52:30.916800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.400 [2024-10-01 13:52:30.926943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.400 [2024-10-01 13:52:30.927107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.400 [2024-10-01 13:52:30.927142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:30.927161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:30.927196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:30.927228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:30.927245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:30.927261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:30.928169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.401 [2024-10-01 13:52:30.938216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:30.938384] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:30.938420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:30.938440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:30.938477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:30.938520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:30.938551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:30.938582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:30.938615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.401 [2024-10-01 13:52:30.948796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:30.949056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:30.949112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:30.949134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:30.949172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:30.949246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:30.949275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:30.949292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:30.949324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.401 [2024-10-01 13:52:30.959558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:30.960474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:30.960524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:30.960546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:30.960744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:30.960795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:30.960816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:30.960834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:30.960867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.401 [2024-10-01 13:52:30.970894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:30.971273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:30.971321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:30.971344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:30.971390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:30.971426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:30.971455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:30.971472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:30.972401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.401 [2024-10-01 13:52:30.982474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:30.982674] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:30.982712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:30.982732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:30.982769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:30.982802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:30.982820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:30.982837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:30.982943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.401 [2024-10-01 13:52:30.993027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:30.993182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:30.993217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:30.993237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:30.993273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:30.993306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:30.993324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:30.993340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:30.993380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.401 [2024-10-01 13:52:31.003695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:31.004573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:31.004621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:31.004644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:31.004833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:31.004882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:31.004902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:31.004937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:31.004973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.401 [2024-10-01 13:52:31.015186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:31.015346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:31.015381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:31.015400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:31.015436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:31.016352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:31.016392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:31.016414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:31.016630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.401 [2024-10-01 13:52:31.026246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:31.026396] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:31.026432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:31.026493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:31.026530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:31.026583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:31.026603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:31.026619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:31.026651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.401 [2024-10-01 13:52:31.036636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:31.036787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:31.036823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:31.036841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:31.036877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:31.036924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:31.036946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:31.036963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:31.036996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.401 [2024-10-01 13:52:31.047282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:31.048161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:31.048209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:31.048232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:31.048410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:31.048477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:31.048500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:31.048518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:31.048552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.401 [2024-10-01 13:52:31.058692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:31.058850] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:31.058885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:31.058905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:31.058957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:31.058991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:31.059040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:31.059066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:31.059988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.401 [2024-10-01 13:52:31.070365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:31.070526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:31.070576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:31.070597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:31.070635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:31.070668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:31.070686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:31.070703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:31.070735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.401 [2024-10-01 13:52:31.080864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:31.081054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:31.081089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.401 [2024-10-01 13:52:31.081113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.401 [2024-10-01 13:52:31.081148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.401 [2024-10-01 13:52:31.081181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.401 [2024-10-01 13:52:31.081199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.401 [2024-10-01 13:52:31.081216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.401 [2024-10-01 13:52:31.081248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.401 [2024-10-01 13:52:31.092306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.401 [2024-10-01 13:52:31.092597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.401 [2024-10-01 13:52:31.092641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.402 [2024-10-01 13:52:31.092664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.402 [2024-10-01 13:52:31.092709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.402 [2024-10-01 13:52:31.092744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.402 [2024-10-01 13:52:31.092762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.402 [2024-10-01 13:52:31.092779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.402 [2024-10-01 13:52:31.092811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.402 [2024-10-01 13:52:31.102413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.402 [2024-10-01 13:52:31.103790] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.402 [2024-10-01 13:52:31.103836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.402 [2024-10-01 13:52:31.103858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.402 [2024-10-01 13:52:31.104111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.402 [2024-10-01 13:52:31.104170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.402 [2024-10-01 13:52:31.104191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.402 [2024-10-01 13:52:31.104207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.402 [2024-10-01 13:52:31.104240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.402 [2024-10-01 13:52:31.112511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.402 [2024-10-01 13:52:31.112664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.402 [2024-10-01 13:52:31.112699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.402 [2024-10-01 13:52:31.112718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.402 [2024-10-01 13:52:31.112754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.402 [2024-10-01 13:52:31.112787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.402 [2024-10-01 13:52:31.112804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.402 [2024-10-01 13:52:31.112821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.402 [2024-10-01 13:52:31.112853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.402 [2024-10-01 13:52:31.123498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.402 [2024-10-01 13:52:31.123660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.402 [2024-10-01 13:52:31.123701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.402 [2024-10-01 13:52:31.123723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.402 [2024-10-01 13:52:31.123761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.402 [2024-10-01 13:52:31.123803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.402 [2024-10-01 13:52:31.123821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.402 [2024-10-01 13:52:31.123837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.402 [2024-10-01 13:52:31.123869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.402 [2024-10-01 13:52:31.134744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.402 [2024-10-01 13:52:31.134900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.402 [2024-10-01 13:52:31.134949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.402 [2024-10-01 13:52:31.134970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.402 [2024-10-01 13:52:31.135048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.402 [2024-10-01 13:52:31.135979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.402 [2024-10-01 13:52:31.136012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.402 [2024-10-01 13:52:31.136032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.402 [2024-10-01 13:52:31.136250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.402 [2024-10-01 13:52:31.145950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.402 [2024-10-01 13:52:31.146167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.402 [2024-10-01 13:52:31.146209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.402 [2024-10-01 13:52:31.146229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.402 [2024-10-01 13:52:31.146264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.402 [2024-10-01 13:52:31.146297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.402 [2024-10-01 13:52:31.146314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.402 [2024-10-01 13:52:31.146334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.402 [2024-10-01 13:52:31.146366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.402 [2024-10-01 13:52:31.157049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.402 [2024-10-01 13:52:31.157210] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.402 [2024-10-01 13:52:31.157251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.402 [2024-10-01 13:52:31.157272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.402 [2024-10-01 13:52:31.157308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.402 [2024-10-01 13:52:31.157341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.402 [2024-10-01 13:52:31.157359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.402 [2024-10-01 13:52:31.157375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.402 [2024-10-01 13:52:31.157409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.402 [2024-10-01 13:52:31.168152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.402 [2024-10-01 13:52:31.169053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.402 [2024-10-01 13:52:31.169097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.402 [2024-10-01 13:52:31.169118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.402 [2024-10-01 13:52:31.169296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.402 [2024-10-01 13:52:31.169352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.402 [2024-10-01 13:52:31.169374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.402 [2024-10-01 13:52:31.169428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.402 [2024-10-01 13:52:31.169464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.402 [2024-10-01 13:52:31.179611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.402 [2024-10-01 13:52:31.179765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.402 [2024-10-01 13:52:31.179799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.402 [2024-10-01 13:52:31.179818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.402 [2024-10-01 13:52:31.179854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.402 [2024-10-01 13:52:31.179887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.402 [2024-10-01 13:52:31.179904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.402 [2024-10-01 13:52:31.179938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.402 [2024-10-01 13:52:31.180842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.402 [2024-10-01 13:52:31.191219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.402 [2024-10-01 13:52:31.191377] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.402 [2024-10-01 13:52:31.191418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.402 [2024-10-01 13:52:31.191439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.402 [2024-10-01 13:52:31.191475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.402 [2024-10-01 13:52:31.191509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.402 [2024-10-01 13:52:31.191527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.402 [2024-10-01 13:52:31.191544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.402 [2024-10-01 13:52:31.191577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.402 [2024-10-01 13:52:31.201713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.402 [2024-10-01 13:52:31.201861] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.402 [2024-10-01 13:52:31.201902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.402 [2024-10-01 13:52:31.201937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.402 [2024-10-01 13:52:31.201974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.402 [2024-10-01 13:52:31.202007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.402 [2024-10-01 13:52:31.202024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.402 [2024-10-01 13:52:31.202040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.402 [2024-10-01 13:52:31.202073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.402 [2024-10-01 13:52:31.212863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.402 [2024-10-01 13:52:31.213761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.402 [2024-10-01 13:52:31.213807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.402 [2024-10-01 13:52:31.213828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.402 [2024-10-01 13:52:31.214036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.402 [2024-10-01 13:52:31.214093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.402 [2024-10-01 13:52:31.214114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.402 [2024-10-01 13:52:31.214130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.402 [2024-10-01 13:52:31.214164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.402 [2024-10-01 13:52:31.224206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.402 [2024-10-01 13:52:31.224529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.402 [2024-10-01 13:52:31.224572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.402 [2024-10-01 13:52:31.224593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.402 [2024-10-01 13:52:31.224649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.402 [2024-10-01 13:52:31.224686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.402 [2024-10-01 13:52:31.224711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.402 [2024-10-01 13:52:31.224727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.402 [2024-10-01 13:52:31.224759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.402 [2024-10-01 13:52:31.235396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.402 [2024-10-01 13:52:31.236142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.402 [2024-10-01 13:52:31.236185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.402 [2024-10-01 13:52:31.236218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.402 [2024-10-01 13:52:31.236308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.402 [2024-10-01 13:52:31.236347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.402 [2024-10-01 13:52:31.236366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.402 [2024-10-01 13:52:31.236384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.402 [2024-10-01 13:52:31.236416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.402 [2024-10-01 13:52:31.246908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.402 [2024-10-01 13:52:31.247072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.247111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.247133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.247169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.247249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.247278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.247294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.247332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.403 [2024-10-01 13:52:31.258551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.258900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.258956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.258978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.259024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.259061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.259079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.259096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.259129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.403 [2024-10-01 13:52:31.270034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.270202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.270249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.270269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.271246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.271487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.271521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.271541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.271620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.403 [2024-10-01 13:52:31.281286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.281450] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.281490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.281512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.281548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.281581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.281599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.281616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.281689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.403 [2024-10-01 13:52:31.292209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.292372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.292409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.292428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.292464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.292498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.292516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.292533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.292565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.403 [2024-10-01 13:52:31.302984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.303885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.303945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.303969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.304164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.304225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.304248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.304265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.304298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.403 [2024-10-01 13:52:31.314440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.314611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.314647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.314667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.314703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.314736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.314754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.314770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.315694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.403 [2024-10-01 13:52:31.325682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.325846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.325882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.325957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.325998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.326031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.326050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.326066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.326101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.403 [2024-10-01 13:52:31.336434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.336595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.336632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.336650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.336686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.336719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.336736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.336751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.336783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.403 [2024-10-01 13:52:31.347092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.347963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.348011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.348033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.348225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.348288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.348311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.348328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.348360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.403 [2024-10-01 13:52:31.358595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.358746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.358788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.358806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.358842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.358875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.358948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.358967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.359868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.403 [2024-10-01 13:52:31.369797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.369992] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.370028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.370047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.370085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.370117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.370135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.370151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.370183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.403 [2024-10-01 13:52:31.380283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.380442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.380477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.380497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.380533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.380565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.380582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.380599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.380631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.403 [2024-10-01 13:52:31.391833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.392054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.392091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.392111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.392148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.392181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.392200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.392216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.392248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.403 [2024-10-01 13:52:31.402499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.402692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.403 [2024-10-01 13:52:31.402732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.403 [2024-10-01 13:52:31.402752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.403 [2024-10-01 13:52:31.402791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.403 [2024-10-01 13:52:31.402824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.403 [2024-10-01 13:52:31.402842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.403 [2024-10-01 13:52:31.402858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.403 [2024-10-01 13:52:31.403777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.403 [2024-10-01 13:52:31.413688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.403 [2024-10-01 13:52:31.413864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.413901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.413947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.413985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.414019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.414037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.414054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.414087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.404 [2024-10-01 13:52:31.424091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.404 [2024-10-01 13:52:31.424247] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.424282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.424302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.424338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.424371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.424389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.424405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.424436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.404 [2024-10-01 13:52:31.434750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.404 [2024-10-01 13:52:31.435625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.435674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.435707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.435947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.435998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.436019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.436036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.436069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.404 [2024-10-01 13:52:31.446084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.404 [2024-10-01 13:52:31.446257] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.446292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.446311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.446347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.446380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.446397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.446413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.447340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.404 [2024-10-01 13:52:31.457217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.404 [2024-10-01 13:52:31.457373] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.457409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.457428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.457464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.457498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.457516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.457533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.457565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.404 [2024-10-01 13:52:31.467671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.404 [2024-10-01 13:52:31.467817] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.467852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.467872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.467907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.467959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.467978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.468029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.468064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.404 [2024-10-01 13:52:31.478259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.404 [2024-10-01 13:52:31.479138] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.479182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.479203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.479384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.479450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.479474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.479489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.479522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.404 [2024-10-01 13:52:31.489774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.404 [2024-10-01 13:52:31.489959] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.489995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.490014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.490050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.490082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.490100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.490117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.491079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.404 [2024-10-01 13:52:31.500410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.404 [2024-10-01 13:52:31.500572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.500608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.500627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.501548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.502210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.502249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.502279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.502380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.404 [2024-10-01 13:52:31.510513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.404 [2024-10-01 13:52:31.510724] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.510760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.510779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.511994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.512859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.512898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.512946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.513088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.404 [2024-10-01 13:52:31.520679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.404 [2024-10-01 13:52:31.520816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.520851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.520871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.521784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.522062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.522101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.522122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.522216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.404 [2024-10-01 13:52:31.531894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.404 [2024-10-01 13:52:31.532046] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.532080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.532100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.532134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.532179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.532199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.532215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.532247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.404 [2024-10-01 13:52:31.542333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.404 [2024-10-01 13:52:31.542472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.542507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.542526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.542585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.542654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.542674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.542690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.542722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.404 [2024-10-01 13:52:31.553329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.404 [2024-10-01 13:52:31.554194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.554243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.554264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.554442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.554491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.554510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.554526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.554572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.404 [2024-10-01 13:52:31.564789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.404 [2024-10-01 13:52:31.564957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.404 [2024-10-01 13:52:31.564992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.404 [2024-10-01 13:52:31.565011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.404 [2024-10-01 13:52:31.565048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.404 [2024-10-01 13:52:31.565081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.404 [2024-10-01 13:52:31.565100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.404 [2024-10-01 13:52:31.565116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.404 [2024-10-01 13:52:31.566024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.404 [2024-10-01 13:52:31.576104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.405 [2024-10-01 13:52:31.576268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.405 [2024-10-01 13:52:31.576303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.405 [2024-10-01 13:52:31.576322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.405 [2024-10-01 13:52:31.576358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.405 [2024-10-01 13:52:31.576391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.405 [2024-10-01 13:52:31.576409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.405 [2024-10-01 13:52:31.576425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.405 [2024-10-01 13:52:31.576496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.405 [2024-10-01 13:52:31.586484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.405 [2024-10-01 13:52:31.586650] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.405 [2024-10-01 13:52:31.586686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.405 [2024-10-01 13:52:31.586705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.405 [2024-10-01 13:52:31.586741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.405 [2024-10-01 13:52:31.586773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.405 [2024-10-01 13:52:31.586791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.405 [2024-10-01 13:52:31.586807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.405 [2024-10-01 13:52:31.586838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.405 [2024-10-01 13:52:31.597199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.405 [2024-10-01 13:52:31.598092] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.405 [2024-10-01 13:52:31.598144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.405 [2024-10-01 13:52:31.598167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.405 [2024-10-01 13:52:31.598345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.405 [2024-10-01 13:52:31.598411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.405 [2024-10-01 13:52:31.598435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.405 [2024-10-01 13:52:31.598451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.405 [2024-10-01 13:52:31.598484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.405 [2024-10-01 13:52:31.608660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.405 [2024-10-01 13:52:31.608811] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.405 [2024-10-01 13:52:31.608847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.405 [2024-10-01 13:52:31.608865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.405 [2024-10-01 13:52:31.608899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.405 [2024-10-01 13:52:31.608950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.405 [2024-10-01 13:52:31.608970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.405 [2024-10-01 13:52:31.608986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.405 [2024-10-01 13:52:31.609879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.405 [2024-10-01 13:52:31.619882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.405 [2024-10-01 13:52:31.620059] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.405 [2024-10-01 13:52:31.620094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.405 [2024-10-01 13:52:31.620154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.405 [2024-10-01 13:52:31.620192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.405 [2024-10-01 13:52:31.620226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.405 [2024-10-01 13:52:31.620243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.405 [2024-10-01 13:52:31.620260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.405 [2024-10-01 13:52:31.620291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.405 [2024-10-01 13:52:31.630291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.405 [2024-10-01 13:52:31.630472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.405 [2024-10-01 13:52:31.630508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.405 [2024-10-01 13:52:31.630527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.405 [2024-10-01 13:52:31.630579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.405 [2024-10-01 13:52:31.630614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.405 [2024-10-01 13:52:31.630632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.405 [2024-10-01 13:52:31.630648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.405 [2024-10-01 13:52:31.630680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.405 [2024-10-01 13:52:31.640985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.405 [2024-10-01 13:52:31.641869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.405 [2024-10-01 13:52:31.641932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.405 [2024-10-01 13:52:31.641957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.405 [2024-10-01 13:52:31.642143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.405 [2024-10-01 13:52:31.642201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.405 [2024-10-01 13:52:31.642221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.405 [2024-10-01 13:52:31.642238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.405 [2024-10-01 13:52:31.642272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.405 [2024-10-01 13:52:31.652464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.405 [2024-10-01 13:52:31.652644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.405 [2024-10-01 13:52:31.652680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.405 [2024-10-01 13:52:31.652700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.405 [2024-10-01 13:52:31.652737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.405 [2024-10-01 13:52:31.653672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.405 [2024-10-01 13:52:31.653753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.405 [2024-10-01 13:52:31.653775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.405 [2024-10-01 13:52:31.654012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.405 [2024-10-01 13:52:31.663696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.405 [2024-10-01 13:52:31.663856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.405 [2024-10-01 13:52:31.663892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.405 [2024-10-01 13:52:31.663926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.405 [2024-10-01 13:52:31.663973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.405 [2024-10-01 13:52:31.664006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.405 [2024-10-01 13:52:31.664024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.405 [2024-10-01 13:52:31.664041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.405 [2024-10-01 13:52:31.664072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.405 [2024-10-01 13:52:31.674131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.405 [2024-10-01 13:52:31.674294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.405 [2024-10-01 13:52:31.674330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.405 [2024-10-01 13:52:31.674349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.405 [2024-10-01 13:52:31.674384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.405 [2024-10-01 13:52:31.674417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.405 [2024-10-01 13:52:31.674436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.405 [2024-10-01 13:52:31.674453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.405 [2024-10-01 13:52:31.674485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.405 [2024-10-01 13:52:31.684779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.405 [2024-10-01 13:52:31.685659] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.405 [2024-10-01 13:52:31.685708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.405 [2024-10-01 13:52:31.685731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.405 [2024-10-01 13:52:31.685950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.405 [2024-10-01 13:52:31.686018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.405 [2024-10-01 13:52:31.686042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.405 [2024-10-01 13:52:31.686059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.405 [2024-10-01 13:52:31.686093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.405 [2024-10-01 13:52:31.696297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.405 [2024-10-01 13:52:31.696471] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.405 [2024-10-01 13:52:31.696508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.405 [2024-10-01 13:52:31.696527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.405 [2024-10-01 13:52:31.696563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.405 [2024-10-01 13:52:31.696597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.405 [2024-10-01 13:52:31.696615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.405 [2024-10-01 13:52:31.696631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.405 [2024-10-01 13:52:31.697558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.405 [2024-10-01 13:52:31.707738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.405 [2024-10-01 13:52:31.707909] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.405 [2024-10-01 13:52:31.707959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.405 [2024-10-01 13:52:31.707979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.405 [2024-10-01 13:52:31.708016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.405 [2024-10-01 13:52:31.708049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.405 [2024-10-01 13:52:31.708067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.405 [2024-10-01 13:52:31.708084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.405 [2024-10-01 13:52:31.708116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.405 [2024-10-01 13:52:31.718292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.405 [2024-10-01 13:52:31.718461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.405 [2024-10-01 13:52:31.718497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.405 [2024-10-01 13:52:31.718515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.405 [2024-10-01 13:52:31.718564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.405 [2024-10-01 13:52:31.718600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.718619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.718637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.718668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.406 [2024-10-01 13:52:31.729056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.729939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.729989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.730051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.730235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.730285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.730306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.730322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.730354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.406 [2024-10-01 13:52:31.740504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.740660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.740695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.740714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.740750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.740783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.740801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.740817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.741737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.406 [2024-10-01 13:52:31.751704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.751867] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.751903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.751940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.751979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.752013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.752032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.752049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.752081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.406 [2024-10-01 13:52:31.762230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.762401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.762437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.762457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.762493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.762526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.762573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.762633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.762669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.406 [2024-10-01 13:52:31.773013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.773927] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.773975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.773997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.774192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.774253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.774273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.774288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.774322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.406 [2024-10-01 13:52:31.784474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.784636] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.784671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.784691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.784727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.784761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.784779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.784795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.785731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.406 [2024-10-01 13:52:31.795754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.795935] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.795971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.795990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.796026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.796060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.796077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.796094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.796126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.406 [2024-10-01 13:52:31.806219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.806458] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.806495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.806515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.806566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.806602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.806621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.806637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.806670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.406 [2024-10-01 13:52:31.816830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.817710] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.817759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.817781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.817998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.818051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.818071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.818087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.818121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.406 [2024-10-01 13:52:31.828266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.828431] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.828467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.828486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.828521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.829452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.829492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.829514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.829723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.406 [2024-10-01 13:52:31.839392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.839556] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.839592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.839611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.839690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.839725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.839743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.839759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.839803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.406 [2024-10-01 13:52:31.849785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.849956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.849992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.850011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.850048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.850080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.850098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.850114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.850146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.406 [2024-10-01 13:52:31.860519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.861403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.861451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.861473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.861667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.861717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.861737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.861754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.861787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.406 7961.00 IOPS, 31.10 MiB/s [2024-10-01 13:52:31.871923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.872080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.872115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.872134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.872170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.873108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.406 [2024-10-01 13:52:31.873148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.406 [2024-10-01 13:52:31.873203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.406 [2024-10-01 13:52:31.873413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.406 [2024-10-01 13:52:31.883084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.406 [2024-10-01 13:52:31.883236] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.406 [2024-10-01 13:52:31.883271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.406 [2024-10-01 13:52:31.883290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.406 [2024-10-01 13:52:31.883325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.406 [2024-10-01 13:52:31.883357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:31.883374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:31.883390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:31.883422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.407 [2024-10-01 13:52:31.893732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:31.893898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:31.893948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:31.893969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:31.894006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.407 [2024-10-01 13:52:31.894039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:31.894057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:31.894074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:31.894106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.407 [2024-10-01 13:52:31.904501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:31.904656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:31.904691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:31.904710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:31.905474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.407 [2024-10-01 13:52:31.905708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:31.905747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:31.905767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:31.905810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.407 [2024-10-01 13:52:31.914611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:31.914759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:31.914844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:31.914867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:31.914904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.407 [2024-10-01 13:52:31.914956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:31.914981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:31.914996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:31.916239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.407 [2024-10-01 13:52:31.924714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:31.924869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:31.924903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:31.924936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:31.924974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.407 [2024-10-01 13:52:31.925007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:31.925024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:31.925040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:31.925071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.407 [2024-10-01 13:52:31.935599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:31.935752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:31.935787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:31.935805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:31.935841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.407 [2024-10-01 13:52:31.935874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:31.935893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:31.935937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:31.935972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.407 [2024-10-01 13:52:31.946079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:31.946225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:31.946260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:31.946279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:31.946315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.407 [2024-10-01 13:52:31.947286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:31.947326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:31.947347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:31.947543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.407 [2024-10-01 13:52:31.957210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:31.957352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:31.957387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:31.957405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:31.957440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.407 [2024-10-01 13:52:31.957473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:31.957490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:31.957505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:31.957537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.407 [2024-10-01 13:52:31.967567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:31.967708] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:31.967743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:31.967762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:31.967797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.407 [2024-10-01 13:52:31.967830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:31.967848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:31.967863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:31.967895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.407 [2024-10-01 13:52:31.978297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:31.979217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:31.979267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:31.979290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:31.979475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.407 [2024-10-01 13:52:31.979525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:31.979546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:31.979562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:31.979596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.407 [2024-10-01 13:52:31.989800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:31.989998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:31.990034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:31.990060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:31.990097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.407 [2024-10-01 13:52:31.991034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:31.991072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:31.991093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:31.991288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.407 [2024-10-01 13:52:32.000964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:32.001109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:32.001144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:32.001163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:32.001199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.407 [2024-10-01 13:52:32.001232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:32.001249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:32.001266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:32.001298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.407 [2024-10-01 13:52:32.011390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:32.011545] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:32.011581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:32.011600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:32.011636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.407 [2024-10-01 13:52:32.011669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:32.011687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:32.011703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:32.011734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.407 [2024-10-01 13:52:32.022062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:32.022953] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:32.023007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:32.023073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:32.023276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.407 [2024-10-01 13:52:32.023328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:32.023348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:32.023364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:32.023397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.407 [2024-10-01 13:52:32.033452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:32.033594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:32.033628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:32.033646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:32.033681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.407 [2024-10-01 13:52:32.033713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.407 [2024-10-01 13:52:32.033730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.407 [2024-10-01 13:52:32.033747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.407 [2024-10-01 13:52:32.034677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.407 [2024-10-01 13:52:32.044552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.407 [2024-10-01 13:52:32.044707] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.407 [2024-10-01 13:52:32.044743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.407 [2024-10-01 13:52:32.044763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.407 [2024-10-01 13:52:32.044798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.408 [2024-10-01 13:52:32.044831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.408 [2024-10-01 13:52:32.044850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.408 [2024-10-01 13:52:32.044866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.408 [2024-10-01 13:52:32.044897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.408 [2024-10-01 13:52:32.054947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.408 [2024-10-01 13:52:32.055102] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.408 [2024-10-01 13:52:32.055138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.408 [2024-10-01 13:52:32.055158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.408 [2024-10-01 13:52:32.055194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.408 [2024-10-01 13:52:32.055227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.408 [2024-10-01 13:52:32.055286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.408 [2024-10-01 13:52:32.055304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.408 [2024-10-01 13:52:32.055337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.408 [2024-10-01 13:52:32.066310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.408 [2024-10-01 13:52:32.066641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.408 [2024-10-01 13:52:32.066694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.408 [2024-10-01 13:52:32.066716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.408 [2024-10-01 13:52:32.066762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.408 [2024-10-01 13:52:32.066807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.408 [2024-10-01 13:52:32.066825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.408 [2024-10-01 13:52:32.066841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.408 [2024-10-01 13:52:32.066873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.408 [2024-10-01 13:52:32.076960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.408 [2024-10-01 13:52:32.077123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.408 [2024-10-01 13:52:32.077157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.408 [2024-10-01 13:52:32.077176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.408 [2024-10-01 13:52:32.077211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.408 [2024-10-01 13:52:32.077245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.408 [2024-10-01 13:52:32.077263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.408 [2024-10-01 13:52:32.077279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.408 [2024-10-01 13:52:32.078190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.408 [2024-10-01 13:52:32.088043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.408 [2024-10-01 13:52:32.088193] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.408 [2024-10-01 13:52:32.088229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.408 [2024-10-01 13:52:32.088248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.408 [2024-10-01 13:52:32.088284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.408 [2024-10-01 13:52:32.088317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.408 [2024-10-01 13:52:32.088335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.408 [2024-10-01 13:52:32.088352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.408 [2024-10-01 13:52:32.088384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.408 [2024-10-01 13:52:32.098452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.408 [2024-10-01 13:52:32.098672] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.408 [2024-10-01 13:52:32.098708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.408 [2024-10-01 13:52:32.098728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.408 [2024-10-01 13:52:32.098764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.408 [2024-10-01 13:52:32.098797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.408 [2024-10-01 13:52:32.098826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.408 [2024-10-01 13:52:32.098842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.408 [2024-10-01 13:52:32.098874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.408 [2024-10-01 13:52:32.109041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.408 [2024-10-01 13:52:32.109895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.408 [2024-10-01 13:52:32.109955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.408 [2024-10-01 13:52:32.109978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.408 [2024-10-01 13:52:32.110163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.408 [2024-10-01 13:52:32.110236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.408 [2024-10-01 13:52:32.110262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.408 [2024-10-01 13:52:32.110278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.408 [2024-10-01 13:52:32.110315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.408 [2024-10-01 13:52:32.120418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.408 [2024-10-01 13:52:32.120588] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.408 [2024-10-01 13:52:32.120623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.408 [2024-10-01 13:52:32.120642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.408 [2024-10-01 13:52:32.120679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.408 [2024-10-01 13:52:32.121615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.408 [2024-10-01 13:52:32.121646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.408 [2024-10-01 13:52:32.121664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.408 [2024-10-01 13:52:32.121863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.408 [2024-10-01 13:52:32.131490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.408 [2024-10-01 13:52:32.132252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.408 [2024-10-01 13:52:32.132301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.408 [2024-10-01 13:52:32.132324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.408 [2024-10-01 13:52:32.132475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.408 [2024-10-01 13:52:32.132518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.408 [2024-10-01 13:52:32.132538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.408 [2024-10-01 13:52:32.132554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.408 [2024-10-01 13:52:32.132588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.408 [2024-10-01 13:52:32.143049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.408 [2024-10-01 13:52:32.143206] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.408 [2024-10-01 13:52:32.143241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.408 [2024-10-01 13:52:32.143260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.408 [2024-10-01 13:52:32.143306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.408 [2024-10-01 13:52:32.143339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.408 [2024-10-01 13:52:32.143356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.408 [2024-10-01 13:52:32.143373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.408 [2024-10-01 13:52:32.143405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.408 [2024-10-01 13:52:32.153817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.408 [2024-10-01 13:52:32.154720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.408 [2024-10-01 13:52:32.154768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.408 [2024-10-01 13:52:32.154790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.408 [2024-10-01 13:52:32.155002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.408 [2024-10-01 13:52:32.155053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.408 [2024-10-01 13:52:32.155074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.408 [2024-10-01 13:52:32.155091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.408 [2024-10-01 13:52:32.155125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.408 [2024-10-01 13:52:32.165310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.408 [2024-10-01 13:52:32.165474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.408 [2024-10-01 13:52:32.165509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.408 [2024-10-01 13:52:32.165528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.408 [2024-10-01 13:52:32.165564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.408 [2024-10-01 13:52:32.166478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.408 [2024-10-01 13:52:32.166517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.408 [2024-10-01 13:52:32.166581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.408 [2024-10-01 13:52:32.166782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.408 [2024-10-01 13:52:32.176369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.408 [2024-10-01 13:52:32.176527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.408 [2024-10-01 13:52:32.176562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.408 [2024-10-01 13:52:32.176581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.408 [2024-10-01 13:52:32.176616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.408 [2024-10-01 13:52:32.176648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.408 [2024-10-01 13:52:32.176666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.408 [2024-10-01 13:52:32.176682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.408 [2024-10-01 13:52:32.176714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.408 [2024-10-01 13:52:32.186839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.408 [2024-10-01 13:52:32.187003] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.408 [2024-10-01 13:52:32.187039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.408 [2024-10-01 13:52:32.187057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.408 [2024-10-01 13:52:32.187093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.408 [2024-10-01 13:52:32.187126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.408 [2024-10-01 13:52:32.187144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.408 [2024-10-01 13:52:32.187160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.408 [2024-10-01 13:52:32.187191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.408 [2024-10-01 13:52:32.197394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.408 [2024-10-01 13:52:32.198269] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.408 [2024-10-01 13:52:32.198317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.198340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.198526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.198608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.198633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.198649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.198683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.409 [2024-10-01 13:52:32.208701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.208844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.208933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.208956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.208994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.209896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.209948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.209970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.210162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.409 [2024-10-01 13:52:32.219801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.219963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.220007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.220026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.220061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.220094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.220112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.220128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.220160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.409 [2024-10-01 13:52:32.230189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.230349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.230385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.230404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.230440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.230471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.230489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.230506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.230550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.409 [2024-10-01 13:52:32.240741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.241602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.241650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.241672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.241854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.241991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.242015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.242032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.242066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.409 [2024-10-01 13:52:32.252160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.252303] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.252337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.252356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.252391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.252422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.252440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.252456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.253367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.409 [2024-10-01 13:52:32.263227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.263378] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.263414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.263433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.263468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.263500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.263519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.263536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.263567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.409 [2024-10-01 13:52:32.273629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.273772] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.273819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.273838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.273874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.273906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.273940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.273957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.274025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.409 [2024-10-01 13:52:32.284336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.285215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.285264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.285287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.285486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.285547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.285570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.285587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.285620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.409 [2024-10-01 13:52:32.295802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.295986] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.296022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.296042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.296078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.297012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.297050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.297071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.297270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.409 [2024-10-01 13:52:32.307054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.307208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.307242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.307261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.307297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.307331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.307349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.307366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.307398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.409 [2024-10-01 13:52:32.318130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.318287] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.318327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.318385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.318424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.318459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.318477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.318492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.318525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.409 [2024-10-01 13:52:32.328901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.329784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.329829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.329851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.330065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.330123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.330145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.330161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.330195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.409 [2024-10-01 13:52:32.339025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.339168] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.339208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.339229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.339264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.339298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.339316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.339332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.339364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.409 [2024-10-01 13:52:32.349132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.349279] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.349320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.349341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.409 [2024-10-01 13:52:32.349376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.409 [2024-10-01 13:52:32.349408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.409 [2024-10-01 13:52:32.349474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.409 [2024-10-01 13:52:32.349494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.409 [2024-10-01 13:52:32.349528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.409 [2024-10-01 13:52:32.360070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.409 [2024-10-01 13:52:32.360226] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.409 [2024-10-01 13:52:32.360261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.409 [2024-10-01 13:52:32.360281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.360317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.360349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.360367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.360384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.360415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.410 [2024-10-01 13:52:32.370612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.370763] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.370804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.410 [2024-10-01 13:52:32.370825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.370862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.370895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.370938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.370958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.371853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.410 [2024-10-01 13:52:32.382226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.382381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.382416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.410 [2024-10-01 13:52:32.382435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.382471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.382505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.382524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.382567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.382603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.410 [2024-10-01 13:52:32.393648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.393840] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.393876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.410 [2024-10-01 13:52:32.393896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.393946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.393998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.394021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.394037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.394070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.410 [2024-10-01 13:52:32.405434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.405765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.405809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.410 [2024-10-01 13:52:32.405831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.405877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.405928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.405960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.405978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.406012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.410 [2024-10-01 13:52:32.416665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.416849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.416884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.410 [2024-10-01 13:52:32.416904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.417864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.418112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.418150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.418171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.418253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.410 [2024-10-01 13:52:32.427844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.428037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.428074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.410 [2024-10-01 13:52:32.428094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.428172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.428206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.428224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.428251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.428283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.410 [2024-10-01 13:52:32.438384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.438811] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.438861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.410 [2024-10-01 13:52:32.438884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.438978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.439032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.439053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.439079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.439113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.410 [2024-10-01 13:52:32.449696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.450756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.450812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.410 [2024-10-01 13:52:32.450835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.451105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.451161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.451184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.451202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.451236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.410 [2024-10-01 13:52:32.459822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.459998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.460033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.410 [2024-10-01 13:52:32.460053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.461301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.461575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.461615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.461686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.462633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.410 [2024-10-01 13:52:32.469945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.470112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.470147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.410 [2024-10-01 13:52:32.470166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.470202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.470245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.470263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.470280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.470312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.410 [2024-10-01 13:52:32.480753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.480945] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.480982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.410 [2024-10-01 13:52:32.481002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.481039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.481072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.481090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.481107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.481138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.410 [2024-10-01 13:52:32.491282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.491467] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.491502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.410 [2024-10-01 13:52:32.491522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.491559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.492498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.492539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.492560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.492764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.410 [2024-10-01 13:52:32.502881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.503076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.503140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.410 [2024-10-01 13:52:32.503162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.503199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.503232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.503250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.503267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.503299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.410 [2024-10-01 13:52:32.513390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.513561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.513597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.410 [2024-10-01 13:52:32.513616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.410 [2024-10-01 13:52:32.513653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.410 [2024-10-01 13:52:32.513688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.410 [2024-10-01 13:52:32.513706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.410 [2024-10-01 13:52:32.513723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.410 [2024-10-01 13:52:32.513755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.410 [2024-10-01 13:52:32.523509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.410 [2024-10-01 13:52:32.523676] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.410 [2024-10-01 13:52:32.523709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.411 [2024-10-01 13:52:32.523729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.411 [2024-10-01 13:52:32.524975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.411 [2024-10-01 13:52:32.525176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.411 [2024-10-01 13:52:32.525210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.411 [2024-10-01 13:52:32.525230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.411 [2024-10-01 13:52:32.525265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.411 [2024-10-01 13:52:32.533648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.411 [2024-10-01 13:52:32.533802] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.411 [2024-10-01 13:52:32.533837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.411 [2024-10-01 13:52:32.533863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.411 [2024-10-01 13:52:32.533898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.411 [2024-10-01 13:52:32.533978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.411 [2024-10-01 13:52:32.533998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.411 [2024-10-01 13:52:32.534052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.411 [2024-10-01 13:52:32.534089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.411 [2024-10-01 13:52:32.544283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.411 [2024-10-01 13:52:32.544457] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.411 [2024-10-01 13:52:32.544492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.411 [2024-10-01 13:52:32.544511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.411 [2024-10-01 13:52:32.544546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.411 [2024-10-01 13:52:32.544580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.411 [2024-10-01 13:52:32.544599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.411 [2024-10-01 13:52:32.544614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.411 [2024-10-01 13:52:32.544647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.411 [2024-10-01 13:52:32.554988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.411 [2024-10-01 13:52:32.555853] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.411 [2024-10-01 13:52:32.555901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.411 [2024-10-01 13:52:32.555945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.411 [2024-10-01 13:52:32.556126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.411 [2024-10-01 13:52:32.556175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.411 [2024-10-01 13:52:32.556195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.411 [2024-10-01 13:52:32.556212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.411 [2024-10-01 13:52:32.556247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.411 [2024-10-01 13:52:32.566467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.411 [2024-10-01 13:52:32.566637] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.411 [2024-10-01 13:52:32.566671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.411 [2024-10-01 13:52:32.566700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.411 [2024-10-01 13:52:32.566734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.411 [2024-10-01 13:52:32.567647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.411 [2024-10-01 13:52:32.567687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.411 [2024-10-01 13:52:32.567709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.411 [2024-10-01 13:52:32.567979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.411 [2024-10-01 13:52:32.577947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.411 [2024-10-01 13:52:32.578108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.411 [2024-10-01 13:52:32.578144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.411 [2024-10-01 13:52:32.578163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.411 [2024-10-01 13:52:32.578199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.411 [2024-10-01 13:52:32.578232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.411 [2024-10-01 13:52:32.578250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.411 [2024-10-01 13:52:32.578267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.411 [2024-10-01 13:52:32.578299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.411 [2024-10-01 13:52:32.588469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.411 [2024-10-01 13:52:32.588623] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.411 [2024-10-01 13:52:32.588657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.411 [2024-10-01 13:52:32.588676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.411 [2024-10-01 13:52:32.588710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.411 [2024-10-01 13:52:32.588742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.411 [2024-10-01 13:52:32.588761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.411 [2024-10-01 13:52:32.588777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.411 [2024-10-01 13:52:32.588810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.411 [2024-10-01 13:52:32.599220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.411 [2024-10-01 13:52:32.600094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.411 [2024-10-01 13:52:32.600141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.411 [2024-10-01 13:52:32.600163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.411 [2024-10-01 13:52:32.600340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.411 [2024-10-01 13:52:32.600414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.411 [2024-10-01 13:52:32.600437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.411 [2024-10-01 13:52:32.600458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.411 [2024-10-01 13:52:32.600492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.411 [2024-10-01 13:52:32.610686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.411 [2024-10-01 13:52:32.610838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.411 [2024-10-01 13:52:32.610873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.411 [2024-10-01 13:52:32.610945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.411 [2024-10-01 13:52:32.610984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.411 [2024-10-01 13:52:32.611897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.411 [2024-10-01 13:52:32.611949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.411 [2024-10-01 13:52:32.611982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.411 [2024-10-01 13:52:32.612193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.411 [2024-10-01 13:52:32.621899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.411 [2024-10-01 13:52:32.622069] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.411 [2024-10-01 13:52:32.622104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.411 [2024-10-01 13:52:32.622124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.411 [2024-10-01 13:52:32.622161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.411 [2024-10-01 13:52:32.622194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.411 [2024-10-01 13:52:32.622212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.411 [2024-10-01 13:52:32.622227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.411 [2024-10-01 13:52:32.622259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.411 [2024-10-01 13:52:32.632422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.411 [2024-10-01 13:52:32.632586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.411 [2024-10-01 13:52:32.632621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.411 [2024-10-01 13:52:32.632640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.411 [2024-10-01 13:52:32.632675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.411 [2024-10-01 13:52:32.632707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.411 [2024-10-01 13:52:32.632726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.411 [2024-10-01 13:52:32.632742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.411 [2024-10-01 13:52:32.632774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.411 [2024-10-01 13:52:32.643238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.411 [2024-10-01 13:52:32.644127] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.411 [2024-10-01 13:52:32.644176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.411 [2024-10-01 13:52:32.644198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.411 [2024-10-01 13:52:32.644385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.411 [2024-10-01 13:52:32.644434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.411 [2024-10-01 13:52:32.644484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.411 [2024-10-01 13:52:32.644502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.411 [2024-10-01 13:52:32.644537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.411 [2024-10-01 13:52:32.654764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.411 [2024-10-01 13:52:32.654950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.411 [2024-10-01 13:52:32.654987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.411 [2024-10-01 13:52:32.655006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.411 [2024-10-01 13:52:32.655043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.411 [2024-10-01 13:52:32.655968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.411 [2024-10-01 13:52:32.656013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.411 [2024-10-01 13:52:32.656034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.411 [2024-10-01 13:52:32.656249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.411 [2024-10-01 13:52:32.665981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.411 [2024-10-01 13:52:32.666121] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.411 [2024-10-01 13:52:32.666155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.411 [2024-10-01 13:52:32.666174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.411 [2024-10-01 13:52:32.666209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.411 [2024-10-01 13:52:32.666243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.411 [2024-10-01 13:52:32.666260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.411 [2024-10-01 13:52:32.666277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.666308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.412 [2024-10-01 13:52:32.676485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.676633] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.676667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.676686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.676721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.676755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.412 [2024-10-01 13:52:32.676772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.412 [2024-10-01 13:52:32.676789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.676821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.412 [2024-10-01 13:52:32.687150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.688109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.688159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.688181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.688395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.688445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.412 [2024-10-01 13:52:32.688465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.412 [2024-10-01 13:52:32.688482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.688515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.412 [2024-10-01 13:52:32.698604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.698767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.698803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.698822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.698859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.699788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.412 [2024-10-01 13:52:32.699827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.412 [2024-10-01 13:52:32.699849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.700069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.412 [2024-10-01 13:52:32.709749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.709943] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.709979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.710009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.710047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.710080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.412 [2024-10-01 13:52:32.710098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.412 [2024-10-01 13:52:32.710115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.710148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.412 [2024-10-01 13:52:32.720264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.720454] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.720492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.720512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.720581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.720616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.412 [2024-10-01 13:52:32.720634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.412 [2024-10-01 13:52:32.720650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.720682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.412 [2024-10-01 13:52:32.731684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.732039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.732086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.732109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.732157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.732194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.412 [2024-10-01 13:52:32.732213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.412 [2024-10-01 13:52:32.732229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.732263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.412 [2024-10-01 13:52:32.743176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.743379] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.743417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.743437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.744373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.744612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.412 [2024-10-01 13:52:32.744647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.412 [2024-10-01 13:52:32.744667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.744749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.412 [2024-10-01 13:52:32.754474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.754666] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.754703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.754724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.754762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.754796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.412 [2024-10-01 13:52:32.754814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.412 [2024-10-01 13:52:32.754874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.754927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.412 [2024-10-01 13:52:32.765163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.765361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.765398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.765418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.765456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.765489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.412 [2024-10-01 13:52:32.765507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.412 [2024-10-01 13:52:32.765523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.765555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.412 [2024-10-01 13:52:32.775848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.776744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.776794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.776817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.777038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.777089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.412 [2024-10-01 13:52:32.777109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.412 [2024-10-01 13:52:32.777125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.777159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.412 [2024-10-01 13:52:32.787262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.787443] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.787481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.787501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.787538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.788467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.412 [2024-10-01 13:52:32.788507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.412 [2024-10-01 13:52:32.788529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.788745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.412 [2024-10-01 13:52:32.798511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.798720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.798795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.798817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.798855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.798888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.412 [2024-10-01 13:52:32.798905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.412 [2024-10-01 13:52:32.798945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.798981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.412 [2024-10-01 13:52:32.809112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.809277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.809313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.809341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.809378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.809411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.412 [2024-10-01 13:52:32.809429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.412 [2024-10-01 13:52:32.809445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.809476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.412 [2024-10-01 13:52:32.819837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.820743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.820792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.820814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.821019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.821068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.412 [2024-10-01 13:52:32.821089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.412 [2024-10-01 13:52:32.821106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.412 [2024-10-01 13:52:32.821140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.412 [2024-10-01 13:52:32.831370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.412 [2024-10-01 13:52:32.831560] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.412 [2024-10-01 13:52:32.831595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.412 [2024-10-01 13:52:32.831614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.412 [2024-10-01 13:52:32.831651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.412 [2024-10-01 13:52:32.832616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.832657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.832678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.832907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.413 [2024-10-01 13:52:32.842586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.842746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.842785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.842805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 [2024-10-01 13:52:32.842841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.842875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.842901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.842932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.842969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.413 [2024-10-01 13:52:32.853236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.853415] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.853451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.853471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 [2024-10-01 13:52:32.853508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.853541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.853560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.853577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.853608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.413 [2024-10-01 13:52:32.863958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.864822] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.864871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.864903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 8140.75 IOPS, 31.80 MiB/s [2024-10-01 13:52:32.866782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.868021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.868061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.868082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.868987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
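The "8140.75 IOPS, 31.80 MiB/s" fragment interleaved above is a periodic throughput sample from the I/O generator running alongside the reset loop, not part of the error trace. The two figures are mutually consistent with a 4 KiB I/O size: 8140.75 IOPS x 4096 bytes = 33,344,512 bytes/s, and 33,344,512 / 1,048,576 = 31.80 MiB/s. The 4 KiB block size is an inference from that arithmetic, not something stated in this excerpt.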
00:18:34.413 [2024-10-01 13:52:32.875403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.875550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.875585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.875603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 [2024-10-01 13:52:32.875638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.875671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.875689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.875704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.876616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.413 [2024-10-01 13:52:32.886714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.886893] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.886947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.886969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 [2024-10-01 13:52:32.887007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.887040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.887058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.887074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.887106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.413 [2024-10-01 13:52:32.897303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.897513] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.897550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.897569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 [2024-10-01 13:52:32.897608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.897640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.897659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.897676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.897709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.413 [2024-10-01 13:52:32.908195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.909112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.909160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.909229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 [2024-10-01 13:52:32.909436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.909484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.909504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.909521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.909554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.413 [2024-10-01 13:52:32.918330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.919736] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.919792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.919815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 [2024-10-01 13:52:32.920056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.921178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.921226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.921248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.921556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.413 [2024-10-01 13:52:32.928444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.929469] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.929536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.929566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 [2024-10-01 13:52:32.929775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.930887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.930946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.930967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.931660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.413 [2024-10-01 13:52:32.938746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.938923] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.938971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.939004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 [2024-10-01 13:52:32.939063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.939101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.939162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.939180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.939214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.413 [2024-10-01 13:52:32.949458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.949704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.949744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.949763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 [2024-10-01 13:52:32.950753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.951038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.951079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.951099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.952395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.413 [2024-10-01 13:52:32.960860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.961042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.961095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.961123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 [2024-10-01 13:52:32.961163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.961198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.961216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.961233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.961265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.413 [2024-10-01 13:52:32.971877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.972116] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.972158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.972178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 [2024-10-01 13:52:32.972218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.972252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.972270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.972287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.972319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.413 [2024-10-01 13:52:32.983650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.983862] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.983907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.983947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 [2024-10-01 13:52:32.983987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.984021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.413 [2024-10-01 13:52:32.984040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.413 [2024-10-01 13:52:32.984057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.413 [2024-10-01 13:52:32.984104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.413 [2024-10-01 13:52:32.995029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.413 [2024-10-01 13:52:32.995230] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.413 [2024-10-01 13:52:32.995271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.413 [2024-10-01 13:52:32.995292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.413 [2024-10-01 13:52:32.995331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.413 [2024-10-01 13:52:32.996325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.414 [2024-10-01 13:52:32.996369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.414 [2024-10-01 13:52:32.996391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.414 [2024-10-01 13:52:32.996629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.414 [2024-10-01 13:52:33.006686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.414 [2024-10-01 13:52:33.006886] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.414 [2024-10-01 13:52:33.006945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.414 [2024-10-01 13:52:33.006968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.414 [2024-10-01 13:52:33.007013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.414 [2024-10-01 13:52:33.007067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.414 [2024-10-01 13:52:33.007099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.414 [2024-10-01 13:52:33.007121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.414 [2024-10-01 13:52:33.007386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.414 [2024-10-01 13:52:33.017134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.414 [2024-10-01 13:52:33.017354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.414 [2024-10-01 13:52:33.017396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.414 [2024-10-01 13:52:33.017416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.414 [2024-10-01 13:52:33.017499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.414 [2024-10-01 13:52:33.017535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.414 [2024-10-01 13:52:33.017553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.414 [2024-10-01 13:52:33.017569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.414 [2024-10-01 13:52:33.017602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.414 [2024-10-01 13:52:33.028657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.414 [2024-10-01 13:52:33.028865] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.414 [2024-10-01 13:52:33.028935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.414 [2024-10-01 13:52:33.028974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.414 [2024-10-01 13:52:33.029018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.414 [2024-10-01 13:52:33.029052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.414 [2024-10-01 13:52:33.029070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.414 [2024-10-01 13:52:33.029087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.414 [2024-10-01 13:52:33.029121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.414 [2024-10-01 13:52:33.039347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.414 [2024-10-01 13:52:33.040532] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.414 [2024-10-01 13:52:33.040599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.414 [2024-10-01 13:52:33.040630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.414 [2024-10-01 13:52:33.040852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.414 [2024-10-01 13:52:33.041010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.414 [2024-10-01 13:52:33.041037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.414 [2024-10-01 13:52:33.041055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.414 [2024-10-01 13:52:33.042366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.414 [2024-10-01 13:52:33.050528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.414 [2024-10-01 13:52:33.050733] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.414 [2024-10-01 13:52:33.050774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.414 [2024-10-01 13:52:33.050794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.414 [2024-10-01 13:52:33.050838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.414 [2024-10-01 13:52:33.050891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.414 [2024-10-01 13:52:33.050940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.414 [2024-10-01 13:52:33.051008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.414 [2024-10-01 13:52:33.051298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.414 [2024-10-01 13:52:33.061067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.414 [2024-10-01 13:52:33.061277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.414 [2024-10-01 13:52:33.061318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.414 [2024-10-01 13:52:33.061339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.414 [2024-10-01 13:52:33.061378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.414 [2024-10-01 13:52:33.061412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.414 [2024-10-01 13:52:33.061430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.414 [2024-10-01 13:52:33.061446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.414 [2024-10-01 13:52:33.061478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.414 [2024-10-01 13:52:33.072520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.414 [2024-10-01 13:52:33.072721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.414 [2024-10-01 13:52:33.072759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.414 [2024-10-01 13:52:33.072780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.414 [2024-10-01 13:52:33.072848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.414 [2024-10-01 13:52:33.072891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.414 [2024-10-01 13:52:33.072925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.414 [2024-10-01 13:52:33.072946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.414 [2024-10-01 13:52:33.072987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.414 [2024-10-01 13:52:33.083137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.414 [2024-10-01 13:52:33.083317] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.414 [2024-10-01 13:52:33.083367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.414 [2024-10-01 13:52:33.083399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.414 [2024-10-01 13:52:33.084394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.414 [2024-10-01 13:52:33.084646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.414 [2024-10-01 13:52:33.084687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.414 [2024-10-01 13:52:33.084708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.414 [2024-10-01 13:52:33.084861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.414 [2024-10-01 13:52:33.094261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.414 [2024-10-01 13:52:33.094507] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.414 [2024-10-01 13:52:33.094572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.414 [2024-10-01 13:52:33.094607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.414 [2024-10-01 13:52:33.094660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.414 [2024-10-01 13:52:33.094705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.414 [2024-10-01 13:52:33.094725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.414 [2024-10-01 13:52:33.094741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.414 [2024-10-01 13:52:33.095027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.414 [2024-10-01 13:52:33.105118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.414 [2024-10-01 13:52:33.105292] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.414 [2024-10-01 13:52:33.105329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.414 [2024-10-01 13:52:33.105360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.414 [2024-10-01 13:52:33.105412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.414 [2024-10-01 13:52:33.105449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.414 [2024-10-01 13:52:33.105468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.414 [2024-10-01 13:52:33.105484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.414 [2024-10-01 13:52:33.105516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.414 [2024-10-01 13:52:33.116531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.414 [2024-10-01 13:52:33.116722] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.414 [2024-10-01 13:52:33.116764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.414 [2024-10-01 13:52:33.116798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.414 [2024-10-01 13:52:33.116852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.414 [2024-10-01 13:52:33.116892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.414 [2024-10-01 13:52:33.116926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.414 [2024-10-01 13:52:33.116947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.414 [2024-10-01 13:52:33.116982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.414 [2024-10-01 13:52:33.127013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.414 [2024-10-01 13:52:33.127210] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.414 [2024-10-01 13:52:33.127260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.414 [2024-10-01 13:52:33.127288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.414 [2024-10-01 13:52:33.128379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.414 [2024-10-01 13:52:33.128678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.414 [2024-10-01 13:52:33.128717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.414 [2024-10-01 13:52:33.128738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.414 [2024-10-01 13:52:33.128871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.414 [2024-10-01 13:52:33.138414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.414 [2024-10-01 13:52:33.138629] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.414 [2024-10-01 13:52:33.138681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.414 [2024-10-01 13:52:33.138715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.414 [2024-10-01 13:52:33.139057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.414 [2024-10-01 13:52:33.139232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.414 [2024-10-01 13:52:33.139269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.414 [2024-10-01 13:52:33.139299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.414 [2024-10-01 13:52:33.139438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.414 [2024-10-01 13:52:33.148716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.414 [2024-10-01 13:52:33.148942] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.148982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.149012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.149068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.149119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.149141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.149160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.149210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.415 [2024-10-01 13:52:33.159436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.160340] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.160391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.160414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.160595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.160645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.160666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.160682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.160755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.415 [2024-10-01 13:52:33.169560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.169721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.169758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.169777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.171054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.171296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.171340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.171360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.171423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.415 [2024-10-01 13:52:33.179669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.180613] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.180663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.180686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.180883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.181879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.181931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.181957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.182571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.415 [2024-10-01 13:52:33.189780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.189941] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.189985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.190005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.190041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.190074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.190091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.190108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.190140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.415 [2024-10-01 13:52:33.200282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.200433] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.200469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.200526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.200569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.201484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.201524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.201544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.201769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.415 [2024-10-01 13:52:33.211511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.211663] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.211698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.211718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.211753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.211785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.211802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.211818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.211850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.415 [2024-10-01 13:52:33.221930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.222099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.222136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.222156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.222192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.222227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.222245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.222262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.222294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.415 [2024-10-01 13:52:33.232571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.233452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.233500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.233522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.233700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.233768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.233828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.233846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.233881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.415 [2024-10-01 13:52:33.243898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.244062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.244098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.244117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.244152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.244184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.244202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.244218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.245130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.415 [2024-10-01 13:52:33.255050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.255213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.255248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.255268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.255305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.255339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.255357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.255373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.255405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.415 [2024-10-01 13:52:33.265402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.265555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.265590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.265609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.265645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.265689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.265710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.265726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.265758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.415 [2024-10-01 13:52:33.276688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.277003] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.277044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.277064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.277122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.277161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.277179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.277196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.277227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.415 [2024-10-01 13:52:33.287351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.287504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.287539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.287558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.287594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.287626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.287644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.287660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.288569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.415 [2024-10-01 13:52:33.298506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.298682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.415 [2024-10-01 13:52:33.298717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.415 [2024-10-01 13:52:33.298737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.415 [2024-10-01 13:52:33.298774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.415 [2024-10-01 13:52:33.298806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.415 [2024-10-01 13:52:33.298824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.415 [2024-10-01 13:52:33.298840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.415 [2024-10-01 13:52:33.298873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.415 [2024-10-01 13:52:33.308946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.415 [2024-10-01 13:52:33.309103] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.309141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.309162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.309239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.309274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.309294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.309311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.416 [2024-10-01 13:52:33.309344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.416 [2024-10-01 13:52:33.319571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.416 [2024-10-01 13:52:33.320440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.320491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.320514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.320693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.320754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.320776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.320793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.416 [2024-10-01 13:52:33.320825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.416 [2024-10-01 13:52:33.330974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.416 [2024-10-01 13:52:33.331142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.331177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.331196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.331232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.331266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.331284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.331300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.416 [2024-10-01 13:52:33.332218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.416 [2024-10-01 13:52:33.342130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.416 [2024-10-01 13:52:33.342304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.342339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.342358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.342394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.342428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.342445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.342500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.416 [2024-10-01 13:52:33.342550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.416 [2024-10-01 13:52:33.352546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.416 [2024-10-01 13:52:33.352707] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.352743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.352763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.352799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.352831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.352850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.352865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.416 [2024-10-01 13:52:33.352904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.416 [2024-10-01 13:52:33.363264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.416 [2024-10-01 13:52:33.364155] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.364204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.364227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.364981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.365052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.365075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.365092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.416 [2024-10-01 13:52:33.366295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.416 [2024-10-01 13:52:33.374777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.416 [2024-10-01 13:52:33.374944] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.374980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.375001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.375037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.375070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.375088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.375105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.416 [2024-10-01 13:52:33.376028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.416 [2024-10-01 13:52:33.386215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.416 [2024-10-01 13:52:33.386415] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.386452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.386471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.386509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.386588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.386612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.386629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.416 [2024-10-01 13:52:33.386662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.416 [2024-10-01 13:52:33.396948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.416 [2024-10-01 13:52:33.397124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.397160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.397180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.397238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.397276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.397294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.397311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.416 [2024-10-01 13:52:33.397354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.416 [2024-10-01 13:52:33.407948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.416 [2024-10-01 13:52:33.408818] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.408866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.408889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.409116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.409167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.409189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.409206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.416 [2024-10-01 13:52:33.409240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.416 [2024-10-01 13:52:33.418069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.416 [2024-10-01 13:52:33.418223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.418259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.418278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.418316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.419622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.419664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.419686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.416 [2024-10-01 13:52:33.419971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.416 [2024-10-01 13:52:33.428180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.416 [2024-10-01 13:52:33.428358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.428395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.428413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.428450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.428504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.428527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.428544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.416 [2024-10-01 13:52:33.428577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.416 [2024-10-01 13:52:33.439613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.416 [2024-10-01 13:52:33.440521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.440572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.440595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.441464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.442771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.442815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.442837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.416 [2024-10-01 13:52:33.443706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.416 [2024-10-01 13:52:33.449746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.416 [2024-10-01 13:52:33.449887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.449942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.449965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.450006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.450045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.450063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.450079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.416 [2024-10-01 13:52:33.450152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.416 [2024-10-01 13:52:33.460287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.416 [2024-10-01 13:52:33.460482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.416 [2024-10-01 13:52:33.460519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.416 [2024-10-01 13:52:33.460539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.416 [2024-10-01 13:52:33.460580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.416 [2024-10-01 13:52:33.460617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.416 [2024-10-01 13:52:33.460635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.416 [2024-10-01 13:52:33.460652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.417 [2024-10-01 13:52:33.460689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.417 [2024-10-01 13:52:33.470431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.417 [2024-10-01 13:52:33.471514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.417 [2024-10-01 13:52:33.471575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.417 [2024-10-01 13:52:33.471598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.417 [2024-10-01 13:52:33.471823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.417 [2024-10-01 13:52:33.471930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.417 [2024-10-01 13:52:33.471954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.417 [2024-10-01 13:52:33.471971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.417 [2024-10-01 13:52:33.472024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.417 [2024-10-01 13:52:33.481303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.417 [2024-10-01 13:52:33.481488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.417 [2024-10-01 13:52:33.481525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.417 [2024-10-01 13:52:33.481545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.417 [2024-10-01 13:52:33.482472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.417 [2024-10-01 13:52:33.483158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.417 [2024-10-01 13:52:33.483199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.417 [2024-10-01 13:52:33.483222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.417 [2024-10-01 13:52:33.483321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.417 [2024-10-01 13:52:33.491881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.417 [2024-10-01 13:52:33.492056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.417 [2024-10-01 13:52:33.492093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.417 [2024-10-01 13:52:33.492151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.417 [2024-10-01 13:52:33.492192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.417 [2024-10-01 13:52:33.492230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.417 [2024-10-01 13:52:33.492248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.417 [2024-10-01 13:52:33.492264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.417 [2024-10-01 13:52:33.492300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.417 [2024-10-01 13:52:33.502021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.417 [2024-10-01 13:52:33.502208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.417 [2024-10-01 13:52:33.502244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.417 [2024-10-01 13:52:33.502263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.417 [2024-10-01 13:52:33.502302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.417 [2024-10-01 13:52:33.502339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.417 [2024-10-01 13:52:33.502357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.417 [2024-10-01 13:52:33.502374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.417 [2024-10-01 13:52:33.502417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.417 [2024-10-01 13:52:33.513397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.417 [2024-10-01 13:52:33.513571] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.417 [2024-10-01 13:52:33.513607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.417 [2024-10-01 13:52:33.513627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.417 [2024-10-01 13:52:33.513668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.417 [2024-10-01 13:52:33.513705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.417 [2024-10-01 13:52:33.513723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.417 [2024-10-01 13:52:33.513740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.417 [2024-10-01 13:52:33.513784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.417 [2024-10-01 13:52:33.525664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.417 [2024-10-01 13:52:33.526020] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.417 [2024-10-01 13:52:33.526059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.417 [2024-10-01 13:52:33.526080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.417 [2024-10-01 13:52:33.526131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.417 [2024-10-01 13:52:33.526172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.417 [2024-10-01 13:52:33.526229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.417 [2024-10-01 13:52:33.526255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.417 [2024-10-01 13:52:33.526293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.417 [2024-10-01 13:52:33.536998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.417 [2024-10-01 13:52:33.537196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.417 [2024-10-01 13:52:33.537232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.417 [2024-10-01 13:52:33.537252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.417 [2024-10-01 13:52:33.537293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.417 [2024-10-01 13:52:33.537330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.417 [2024-10-01 13:52:33.537349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.417 [2024-10-01 13:52:33.537367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.417 [2024-10-01 13:52:33.538327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.417 [2024-10-01 13:52:33.547728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.417 [2024-10-01 13:52:33.548800] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.417 [2024-10-01 13:52:33.548846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.417 [2024-10-01 13:52:33.548868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.417 [2024-10-01 13:52:33.549505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.417 [2024-10-01 13:52:33.549643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.417 [2024-10-01 13:52:33.549682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.417 [2024-10-01 13:52:33.549703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.417 [2024-10-01 13:52:33.549743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.417 [2024-10-01 13:52:33.557843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.417 [2024-10-01 13:52:33.558016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.417 [2024-10-01 13:52:33.558052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.417 [2024-10-01 13:52:33.558071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.417 [2024-10-01 13:52:33.558111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.417 [2024-10-01 13:52:33.558149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.417 [2024-10-01 13:52:33.558167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.417 [2024-10-01 13:52:33.558183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.417 [2024-10-01 13:52:33.558219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.417 [2024-10-01 13:52:33.568556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.417 [2024-10-01 13:52:33.568734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.417 [2024-10-01 13:52:33.568771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.417 [2024-10-01 13:52:33.568790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.417 [2024-10-01 13:52:33.568829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.417 [2024-10-01 13:52:33.568866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.417 [2024-10-01 13:52:33.568884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.417 [2024-10-01 13:52:33.568900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.417 [2024-10-01 13:52:33.569842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.417 [2024-10-01 13:52:33.578690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.417 [2024-10-01 13:52:33.579634] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.417 [2024-10-01 13:52:33.579682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.417 [2024-10-01 13:52:33.579705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.417 [2024-10-01 13:52:33.579902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.417 [2024-10-01 13:52:33.580939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.417 [2024-10-01 13:52:33.580978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.417 [2024-10-01 13:52:33.581000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.417 [2024-10-01 13:52:33.581606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.417 [2024-10-01 13:52:33.588820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.417 [2024-10-01 13:52:33.588986] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.417 [2024-10-01 13:52:33.589023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.417 [2024-10-01 13:52:33.589042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.417 [2024-10-01 13:52:33.589796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.417 [2024-10-01 13:52:33.590021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.417 [2024-10-01 13:52:33.590058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.417 [2024-10-01 13:52:33.590078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.417 [2024-10-01 13:52:33.590125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.417 [2024-10-01 13:52:33.598944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.417 [2024-10-01 13:52:33.599101] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.417 [2024-10-01 13:52:33.599137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.417 [2024-10-01 13:52:33.599157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.417 [2024-10-01 13:52:33.599240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.417 [2024-10-01 13:52:33.599279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.417 [2024-10-01 13:52:33.599297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.417 [2024-10-01 13:52:33.599312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.417 [2024-10-01 13:52:33.599348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.417 [2024-10-01 13:52:33.609633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.417 [2024-10-01 13:52:33.609807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.417 [2024-10-01 13:52:33.609844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.417 [2024-10-01 13:52:33.609863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.418 [2024-10-01 13:52:33.609903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.418 [2024-10-01 13:52:33.609960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.418 [2024-10-01 13:52:33.609980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.418 [2024-10-01 13:52:33.609996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.418 [2024-10-01 13:52:33.610032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.418 [2024-10-01 13:52:33.619763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.418 [2024-10-01 13:52:33.619933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.418 [2024-10-01 13:52:33.619968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.418 [2024-10-01 13:52:33.619988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.418 [2024-10-01 13:52:33.620028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.418 [2024-10-01 13:52:33.620065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.418 [2024-10-01 13:52:33.620084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.418 [2024-10-01 13:52:33.620100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.418 [2024-10-01 13:52:33.620135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.418 [2024-10-01 13:52:33.630442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.418 [2024-10-01 13:52:33.630611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.418 [2024-10-01 13:52:33.630650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.418 [2024-10-01 13:52:33.630670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.418 [2024-10-01 13:52:33.630718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.418 [2024-10-01 13:52:33.630756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.418 [2024-10-01 13:52:33.630774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.418 [2024-10-01 13:52:33.630825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.418 [2024-10-01 13:52:33.630864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.418 [2024-10-01 13:52:33.640632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.418 [2024-10-01 13:52:33.640785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.418 [2024-10-01 13:52:33.640821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.418 [2024-10-01 13:52:33.640841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.418 [2024-10-01 13:52:33.640880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.418 [2024-10-01 13:52:33.640934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.418 [2024-10-01 13:52:33.640956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.418 [2024-10-01 13:52:33.640972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.418 [2024-10-01 13:52:33.641008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.418 [2024-10-01 13:52:33.651580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.418 [2024-10-01 13:52:33.651743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.418 [2024-10-01 13:52:33.651780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.418 [2024-10-01 13:52:33.651799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.418 [2024-10-01 13:52:33.651839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.418 [2024-10-01 13:52:33.651877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.418 [2024-10-01 13:52:33.651895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.418 [2024-10-01 13:52:33.651929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.418 [2024-10-01 13:52:33.651971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.418 [2024-10-01 13:52:33.661702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.418 [2024-10-01 13:52:33.661859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.418 [2024-10-01 13:52:33.661896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.418 [2024-10-01 13:52:33.661930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.418 [2024-10-01 13:52:33.661974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.418 [2024-10-01 13:52:33.662011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.418 [2024-10-01 13:52:33.662030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.418 [2024-10-01 13:52:33.662046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.418 [2024-10-01 13:52:33.662082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.418 [2024-10-01 13:52:33.672527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.418 [2024-10-01 13:52:33.672737] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.418 [2024-10-01 13:52:33.672774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.418 [2024-10-01 13:52:33.672794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.418 [2024-10-01 13:52:33.672834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.418 [2024-10-01 13:52:33.672871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.418 [2024-10-01 13:52:33.672888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.418 [2024-10-01 13:52:33.672904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.418 [2024-10-01 13:52:33.672960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.418 [2024-10-01 13:52:33.683327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.418 [2024-10-01 13:52:33.683509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.418 [2024-10-01 13:52:33.683545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.418 [2024-10-01 13:52:33.683564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.418 [2024-10-01 13:52:33.683604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.418 [2024-10-01 13:52:33.684545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.418 [2024-10-01 13:52:33.684584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.418 [2024-10-01 13:52:33.684605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.418 [2024-10-01 13:52:33.685250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.418 [2024-10-01 13:52:33.693861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.418 [2024-10-01 13:52:33.694025] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.418 [2024-10-01 13:52:33.694060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.418 [2024-10-01 13:52:33.694079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.418 [2024-10-01 13:52:33.694118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.418 [2024-10-01 13:52:33.694156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.418 [2024-10-01 13:52:33.694173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.418 [2024-10-01 13:52:33.694189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.418 [2024-10-01 13:52:33.694225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.418 [2024-10-01 13:52:33.704430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.704501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.704535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.704553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.704606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.704623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.704640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.704655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.704672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.704687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.704704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.704718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.704735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 
nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.704749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.704766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.704781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.704798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.418 [2024-10-01 13:52:33.704812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.704829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.418 [2024-10-01 13:52:33.704844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.704860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.418 [2024-10-01 13:52:33.704875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.704892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.418 [2024-10-01 13:52:33.704906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.704941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.418 [2024-10-01 13:52:33.704958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.704975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.418 [2024-10-01 13:52:33.704995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.705011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.418 [2024-10-01 13:52:33.705035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.705053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.418 [2024-10-01 13:52:33.705067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.705084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.705100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.705117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.705132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.705149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.705163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.705180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.705194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.705210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.705227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.705246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.705261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.705277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.705292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.705309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.705324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.705340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.418 [2024-10-01 13:52:33.705355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.418 [2024-10-01 13:52:33.705372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.705387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 
13:52:33.705418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.705459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.705491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.705523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.705554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.705585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.705616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.705648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.705679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.705711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.705743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.705776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.705808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.705839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.705886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.705931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.705964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.705981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.705996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.706419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.706451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.706482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.706513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.706558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.706591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.706623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.706654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 
[2024-10-01 13:52:33.706784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.706979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.706996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.707012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.707028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.707042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.707059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.707074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.707091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.707105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.707122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.707145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.707163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.707177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.707194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.419 [2024-10-01 13:52:33.707209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.707226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.707241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.707257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.707272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.707289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.419 [2024-10-01 13:52:33.707303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.419 [2024-10-01 13:52:33.707320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.420 [2024-10-01 13:52:33.707335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.420 [2024-10-01 13:52:33.707367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.420 [2024-10-01 13:52:33.707411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.420 [2024-10-01 13:52:33.707442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:56 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.420 [2024-10-01 13:52:33.707474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.707505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.707536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.707573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.707606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.707637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.707668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.707699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.707730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.707761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53472 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.707792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.707823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.707855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.707896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.707970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.707991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.708007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.708049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.708081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.708113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.708144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:34.420 [2024-10-01 13:52:33.708175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.708207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.708238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.420 [2024-10-01 13:52:33.708269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d0020 is same with the state(6) to be set 00:18:34.420 [2024-10-01 13:52:33.708304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.708316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.708328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53584 len:8 PRP1 0x0 PRP2 0x0 00:18:34.420 [2024-10-01 13:52:33.708342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.708369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.708381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53912 len:8 PRP1 0x0 PRP2 0x0 00:18:34.420 [2024-10-01 13:52:33.708395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.708420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.708439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53920 len:8 PRP1 0x0 PRP2 0x0 00:18:34.420 [2024-10-01 13:52:33.708455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.708488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.708499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53928 len:8 PRP1 0x0 PRP2 0x0 
00:18:34.420 [2024-10-01 13:52:33.708513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.708539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.708550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53936 len:8 PRP1 0x0 PRP2 0x0 00:18:34.420 [2024-10-01 13:52:33.708563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.708589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.708600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53944 len:8 PRP1 0x0 PRP2 0x0 00:18:34.420 [2024-10-01 13:52:33.708614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.708639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.708650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53952 len:8 PRP1 0x0 PRP2 0x0 00:18:34.420 [2024-10-01 13:52:33.708664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.708690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.708701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53960 len:8 PRP1 0x0 PRP2 0x0 00:18:34.420 [2024-10-01 13:52:33.708714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.708740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.708751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53968 len:8 PRP1 0x0 PRP2 0x0 00:18:34.420 [2024-10-01 13:52:33.708766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.708803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.708814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53976 len:8 PRP1 0x0 PRP2 0x0 00:18:34.420 [2024-10-01 13:52:33.708829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.708863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.708874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53984 len:8 PRP1 0x0 PRP2 0x0 00:18:34.420 [2024-10-01 13:52:33.708888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.708937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.708948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53992 len:8 PRP1 0x0 PRP2 0x0 00:18:34.420 [2024-10-01 13:52:33.708963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.708978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.708989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.709000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54000 len:8 PRP1 0x0 PRP2 0x0 00:18:34.420 [2024-10-01 13:52:33.709014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.709029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.709039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.709050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54008 len:8 PRP1 0x0 PRP2 0x0 00:18:34.420 [2024-10-01 13:52:33.709065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.420 [2024-10-01 13:52:33.709079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.420 [2024-10-01 13:52:33.709090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.420 [2024-10-01 13:52:33.709101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54016 len:8 PRP1 0x0 PRP2 0x0 00:18:34.420 [2024-10-01 13:52:33.709115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.421 [2024-10-01 13:52:33.709129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.421 [2024-10-01 13:52:33.709140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.421 [2024-10-01 13:52:33.709150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54024 len:8 PRP1 0x0 PRP2 0x0 00:18:34.421 [2024-10-01 13:52:33.709164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.421 [2024-10-01 13:52:33.709179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.421 [2024-10-01 13:52:33.709189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.421 [2024-10-01 13:52:33.709200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54032 len:8 PRP1 0x0 PRP2 0x0 00:18:34.421 [2024-10-01 13:52:33.709214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.421 [2024-10-01 13:52:33.709310] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9d0020 was disconnected and freed. reset controller. 00:18:34.421 [2024-10-01 13:52:33.709415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.421 [2024-10-01 13:52:33.709443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.421 [2024-10-01 13:52:33.709474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.421 [2024-10-01 13:52:33.709490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.421 [2024-10-01 13:52:33.709507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.421 [2024-10-01 13:52:33.709521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.421 [2024-10-01 13:52:33.709536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.421 [2024-10-01 13:52:33.709557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.421 [2024-10-01 13:52:33.709572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.421 [2024-10-01 13:52:33.710711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.421 [2024-10-01 13:52:33.710762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.421 [2024-10-01 13:52:33.711006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.421 [2024-10-01 13:52:33.711278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.421 [2024-10-01 13:52:33.711312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.421 [2024-10-01 13:52:33.711331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.421 [2024-10-01 13:52:33.711391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.421 [2024-10-01 13:52:33.711415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.421 
[2024-10-01 13:52:33.711432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.421 [2024-10-01 13:52:33.711465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.421 [2024-10-01 13:52:33.711489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.421 [2024-10-01 13:52:33.711516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.421 [2024-10-01 13:52:33.711534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.421 [2024-10-01 13:52:33.711550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.421 [2024-10-01 13:52:33.711567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.421 [2024-10-01 13:52:33.711583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.421 [2024-10-01 13:52:33.711596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.421 [2024-10-01 13:52:33.711628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.421 [2024-10-01 13:52:33.711647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.421 [2024-10-01 13:52:33.721165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.421 [2024-10-01 13:52:33.721224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.421 [2024-10-01 13:52:33.722097] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.421 [2024-10-01 13:52:33.722158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.421 [2024-10-01 13:52:33.722180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.421 [2024-10-01 13:52:33.722235] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.421 [2024-10-01 13:52:33.722260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.421 [2024-10-01 13:52:33.722276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.421 [2024-10-01 13:52:33.722501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.421 [2024-10-01 13:52:33.722535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.421 [2024-10-01 13:52:33.722625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.421 [2024-10-01 13:52:33.722651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.421 [2024-10-01 13:52:33.722668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.421 [2024-10-01 13:52:33.722686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.421 [2024-10-01 13:52:33.722701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.421 [2024-10-01 13:52:33.722716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.421 [2024-10-01 13:52:33.722769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.421 [2024-10-01 13:52:33.722793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.421 [2024-10-01 13:52:33.731561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.421 [2024-10-01 13:52:33.731624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.421 [2024-10-01 13:52:33.731743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.421 [2024-10-01 13:52:33.731776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.421 [2024-10-01 13:52:33.731795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.421 [2024-10-01 13:52:33.731846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.421 [2024-10-01 13:52:33.731870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.421 [2024-10-01 13:52:33.731887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.421 [2024-10-01 13:52:33.731936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.421 [2024-10-01 13:52:33.731965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.421 [2024-10-01 13:52:33.731992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.421 [2024-10-01 13:52:33.732010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.421 [2024-10-01 13:52:33.732025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.421 [2024-10-01 13:52:33.732042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.421 [2024-10-01 13:52:33.732057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.421 [2024-10-01 13:52:33.732093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.421 [2024-10-01 13:52:33.732127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.421 [2024-10-01 13:52:33.732145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.421 [2024-10-01 13:52:33.742152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.421 [2024-10-01 13:52:33.742223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.421 [2024-10-01 13:52:33.742338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.421 [2024-10-01 13:52:33.742371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.421 [2024-10-01 13:52:33.742399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.421 [2024-10-01 13:52:33.742452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.421 [2024-10-01 13:52:33.742476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.421 [2024-10-01 13:52:33.742492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.421 [2024-10-01 13:52:33.742524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.421 [2024-10-01 13:52:33.742564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.421 [2024-10-01 13:52:33.742594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.421 [2024-10-01 13:52:33.742612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.421 [2024-10-01 13:52:33.742628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.421 [2024-10-01 13:52:33.742646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.421 [2024-10-01 13:52:33.742660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.421 [2024-10-01 13:52:33.742674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.421 [2024-10-01 13:52:33.742707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.421 [2024-10-01 13:52:33.742726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.421 [2024-10-01 13:52:33.752698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.421 [2024-10-01 13:52:33.752763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.421 [2024-10-01 13:52:33.752871] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.421 [2024-10-01 13:52:33.752903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.421 [2024-10-01 13:52:33.752939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.421 [2024-10-01 13:52:33.752994] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.421 [2024-10-01 13:52:33.753020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.421 [2024-10-01 13:52:33.753035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.421 [2024-10-01 13:52:33.753069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.421 [2024-10-01 13:52:33.753128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.421 [2024-10-01 13:52:33.753158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.421 [2024-10-01 13:52:33.753176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.421 [2024-10-01 13:52:33.753192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.421 [2024-10-01 13:52:33.753210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.421 [2024-10-01 13:52:33.753224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.421 [2024-10-01 13:52:33.753238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.421 [2024-10-01 13:52:33.754467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.421 [2024-10-01 13:52:33.754505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.421 [2024-10-01 13:52:33.763834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.421 [2024-10-01 13:52:33.763906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.421 [2024-10-01 13:52:33.764061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.421 [2024-10-01 13:52:33.764097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.421 [2024-10-01 13:52:33.764116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.421 [2024-10-01 13:52:33.764169] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.421 [2024-10-01 13:52:33.764193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.421 [2024-10-01 13:52:33.764210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.421 [2024-10-01 13:52:33.764247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.421 [2024-10-01 13:52:33.764272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.421 [2024-10-01 13:52:33.764299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.421 [2024-10-01 13:52:33.764317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.421 [2024-10-01 13:52:33.764334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.421 [2024-10-01 13:52:33.764352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.421 [2024-10-01 13:52:33.764367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.421 [2024-10-01 13:52:33.764381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.421 [2024-10-01 13:52:33.764412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.421 [2024-10-01 13:52:33.764431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.421 [2024-10-01 13:52:33.775941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.421 [2024-10-01 13:52:33.776016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.421 [2024-10-01 13:52:33.776191] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.421 [2024-10-01 13:52:33.776227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.421 [2024-10-01 13:52:33.776285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.421 [2024-10-01 13:52:33.776342] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.422 [2024-10-01 13:52:33.776368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.422 [2024-10-01 13:52:33.776385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.422 [2024-10-01 13:52:33.776424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.422 [2024-10-01 13:52:33.776449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.422 [2024-10-01 13:52:33.776494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.422 [2024-10-01 13:52:33.776515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.422 [2024-10-01 13:52:33.776533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.422 [2024-10-01 13:52:33.776551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.422 [2024-10-01 13:52:33.776567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.422 [2024-10-01 13:52:33.776580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.422 [2024-10-01 13:52:33.776612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.422 [2024-10-01 13:52:33.776630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.422 [2024-10-01 13:52:33.786406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.422 [2024-10-01 13:52:33.786476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.422 [2024-10-01 13:52:33.786605] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.422 [2024-10-01 13:52:33.786640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.422 [2024-10-01 13:52:33.786659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.422 [2024-10-01 13:52:33.786711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.422 [2024-10-01 13:52:33.786735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.422 [2024-10-01 13:52:33.786762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.422 [2024-10-01 13:52:33.786798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.422 [2024-10-01 13:52:33.786823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.422 [2024-10-01 13:52:33.786849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.422 [2024-10-01 13:52:33.786866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.422 [2024-10-01 13:52:33.786882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.422 [2024-10-01 13:52:33.786898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.422 [2024-10-01 13:52:33.786929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.422 [2024-10-01 13:52:33.786946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.422 [2024-10-01 13:52:33.788212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.422 [2024-10-01 13:52:33.788251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.422 [2024-10-01 13:52:33.797539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.422 [2024-10-01 13:52:33.797606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.422 [2024-10-01 13:52:33.797729] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.422 [2024-10-01 13:52:33.797762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.422 [2024-10-01 13:52:33.797782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.422 [2024-10-01 13:52:33.797834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.422 [2024-10-01 13:52:33.797859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.422 [2024-10-01 13:52:33.797875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.422 [2024-10-01 13:52:33.797928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.422 [2024-10-01 13:52:33.797956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.422 [2024-10-01 13:52:33.797985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.422 [2024-10-01 13:52:33.798003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.422 [2024-10-01 13:52:33.798019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.422 [2024-10-01 13:52:33.798037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.422 [2024-10-01 13:52:33.798052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.422 [2024-10-01 13:52:33.798066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.422 [2024-10-01 13:52:33.798097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.422 [2024-10-01 13:52:33.798115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.422 [2024-10-01 13:52:33.809515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.422 [2024-10-01 13:52:33.809598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.422 [2024-10-01 13:52:33.809784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.422 [2024-10-01 13:52:33.809820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.422 [2024-10-01 13:52:33.809841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.422 [2024-10-01 13:52:33.809893] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.422 [2024-10-01 13:52:33.809931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.422 [2024-10-01 13:52:33.809952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.422 [2024-10-01 13:52:33.809989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.422 [2024-10-01 13:52:33.810014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.422 [2024-10-01 13:52:33.810104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.422 [2024-10-01 13:52:33.810128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.422 [2024-10-01 13:52:33.810145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.422 [2024-10-01 13:52:33.810164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.422 [2024-10-01 13:52:33.810179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.422 [2024-10-01 13:52:33.810193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.422 [2024-10-01 13:52:33.810233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.422 [2024-10-01 13:52:33.810253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.422 [2024-10-01 13:52:33.820017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.422 [2024-10-01 13:52:33.820100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.422 [2024-10-01 13:52:33.820220] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.422 [2024-10-01 13:52:33.820265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.422 [2024-10-01 13:52:33.820284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.422 [2024-10-01 13:52:33.820335] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.422 [2024-10-01 13:52:33.820359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.422 [2024-10-01 13:52:33.820375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.422 [2024-10-01 13:52:33.820411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.422 [2024-10-01 13:52:33.820435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.422 [2024-10-01 13:52:33.820462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.422 [2024-10-01 13:52:33.820479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.422 [2024-10-01 13:52:33.820495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.422 [2024-10-01 13:52:33.820513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.422 [2024-10-01 13:52:33.820528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.422 [2024-10-01 13:52:33.820542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.422 [2024-10-01 13:52:33.821776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.422 [2024-10-01 13:52:33.821816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.422 [2024-10-01 13:52:33.831222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.422 [2024-10-01 13:52:33.831290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.422 [2024-10-01 13:52:33.831411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.422 [2024-10-01 13:52:33.831445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.422 [2024-10-01 13:52:33.831464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.422 [2024-10-01 13:52:33.831561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.422 [2024-10-01 13:52:33.831587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.422 [2024-10-01 13:52:33.831605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.422 [2024-10-01 13:52:33.831640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.422 [2024-10-01 13:52:33.831665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.422 [2024-10-01 13:52:33.831697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.422 [2024-10-01 13:52:33.831715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.831732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.831749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.831765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.831778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.831809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.423 [2024-10-01 13:52:33.831827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.423 [2024-10-01 13:52:33.843146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.843218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.843376] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.843410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.423 [2024-10-01 13:52:33.843429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.843489] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.843514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.423 [2024-10-01 13:52:33.843531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.843566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.843591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.843618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.843637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.843652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.843670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.843685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.843699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.843748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.423 [2024-10-01 13:52:33.843799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.423 [2024-10-01 13:52:33.853453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.853521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.853638] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.853671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.423 [2024-10-01 13:52:33.853690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.853741] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.853773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.423 [2024-10-01 13:52:33.853789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.853823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.853847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.853874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.853892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.853907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.853942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.853958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.853971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.855233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.423 [2024-10-01 13:52:33.855272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.423 [2024-10-01 13:52:33.864599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.864661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.864792] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.864825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.423 [2024-10-01 13:52:33.864844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.864895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.864935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.423 [2024-10-01 13:52:33.864956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.864993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.865017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.423 8111.20 IOPS, 31.68 MiB/s [2024-10-01 13:52:33.866933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.867000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.867021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.867040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.867056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.867069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.867236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.423 [2024-10-01 13:52:33.867262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.423 [2024-10-01 13:52:33.876447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.876510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.876674] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.876708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.423 [2024-10-01 13:52:33.876727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.876778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.876802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.423 [2024-10-01 13:52:33.876819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.876855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.876880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.876907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.876943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.876959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.876977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.876992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.877006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.877038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.423 [2024-10-01 13:52:33.877056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.423 [2024-10-01 13:52:33.886906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.886983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.887097] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.887130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.423 [2024-10-01 13:52:33.887149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.887200] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.887257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.423 [2024-10-01 13:52:33.887276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.887311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.887345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.887371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.887388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.887403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.887419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.887434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.887447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.888678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.423 [2024-10-01 13:52:33.888718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.423 [2024-10-01 13:52:33.898191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.898249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.898372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.898404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.423 [2024-10-01 13:52:33.898424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.898476] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.898499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.423 [2024-10-01 13:52:33.898515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.898569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.898597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.898625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.898642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.898658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.898676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.898691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.898704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.898738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.423 [2024-10-01 13:52:33.898757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.423 [2024-10-01 13:52:33.910356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.910434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.910620] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.910659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.423 [2024-10-01 13:52:33.910678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.910730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.910755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.423 [2024-10-01 13:52:33.910771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.910807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.910832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.910878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.910901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.910946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.910967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.910983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.910997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.911029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.423 [2024-10-01 13:52:33.911047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.423 [2024-10-01 13:52:33.920860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.920947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.921070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.921103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.423 [2024-10-01 13:52:33.921122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.921175] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.921199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.423 [2024-10-01 13:52:33.921216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.921250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.921274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.921301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.921319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.921380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.921399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.921414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.423 [2024-10-01 13:52:33.921427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.423 [2024-10-01 13:52:33.922681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.423 [2024-10-01 13:52:33.922721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.423 [2024-10-01 13:52:33.931899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.931974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.423 [2024-10-01 13:52:33.932093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.932126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.423 [2024-10-01 13:52:33.932145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.932196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.423 [2024-10-01 13:52:33.932221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.423 [2024-10-01 13:52:33.932251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.423 [2024-10-01 13:52:33.932287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.932311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.423 [2024-10-01 13:52:33.932338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.423 [2024-10-01 13:52:33.932356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:33.932373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:33.932391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:33.932406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:33.932420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:33.932457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.424 [2024-10-01 13:52:33.932475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.424 [2024-10-01 13:52:33.942062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:33.942162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:33.942268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:33.942300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.424 [2024-10-01 13:52:33.942318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:33.943618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:33.943665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.424 [2024-10-01 13:52:33.943731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:33.943754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:33.944641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:33.944684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:33.944704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:33.944721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:33.944858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.424 [2024-10-01 13:52:33.944884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:33.944900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:33.944930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:33.944984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.424 [2024-10-01 13:52:33.953339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:33.953421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:33.953541] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:33.953574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.424 [2024-10-01 13:52:33.953593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:33.953645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:33.953670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.424 [2024-10-01 13:52:33.953686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:33.954664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:33.954712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:33.954957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:33.954986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:33.955012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:33.955031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:33.955047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:33.955061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:33.955141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.424 [2024-10-01 13:52:33.955163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.424 [2024-10-01 13:52:33.964223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:33.964314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:33.965344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:33.965392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.424 [2024-10-01 13:52:33.965415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:33.965470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:33.965495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.424 [2024-10-01 13:52:33.965512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:33.966149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:33.966194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:33.966306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:33.966333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:33.966349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:33.966368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:33.966383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:33.966397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:33.966429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.424 [2024-10-01 13:52:33.966448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.424 [2024-10-01 13:52:33.974594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:33.974660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:33.974776] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:33.974808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.424 [2024-10-01 13:52:33.974826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:33.974884] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:33.974909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.424 [2024-10-01 13:52:33.974943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:33.974978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:33.975003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:33.975029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:33.975047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:33.975062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:33.975105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:33.975122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:33.975136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:33.975167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.424 [2024-10-01 13:52:33.975185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.424 [2024-10-01 13:52:33.984748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:33.984858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:33.984975] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:33.985014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.424 [2024-10-01 13:52:33.985033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:33.985109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:33.985137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.424 [2024-10-01 13:52:33.985154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:33.985174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:33.985208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:33.985228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:33.985242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:33.985257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:33.985288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.424 [2024-10-01 13:52:33.985306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:33.985321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:33.985335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:33.986623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.424 [2024-10-01 13:52:33.995022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:33.995078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:33.995186] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:33.995219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.424 [2024-10-01 13:52:33.995237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:33.995289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:33.995314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.424 [2024-10-01 13:52:33.995331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:33.995402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:33.995428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:33.995468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:33.995488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:33.995503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:33.995521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:33.995537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:33.995551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:33.995599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.424 [2024-10-01 13:52:33.995621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.424 [2024-10-01 13:52:34.005395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:34.005472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:34.005582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:34.005614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.424 [2024-10-01 13:52:34.005632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:34.005683] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:34.005708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.424 [2024-10-01 13:52:34.005725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:34.005774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:34.005807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:34.005835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:34.005852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:34.005868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:34.005885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:34.005901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:34.005929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:34.007175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.424 [2024-10-01 13:52:34.007216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.424 [2024-10-01 13:52:34.016633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:34.016692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.424 [2024-10-01 13:52:34.016839] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:34.016873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.424 [2024-10-01 13:52:34.016892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:34.016967] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.424 [2024-10-01 13:52:34.016993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.424 [2024-10-01 13:52:34.017009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.424 [2024-10-01 13:52:34.017043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:34.017068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.424 [2024-10-01 13:52:34.017095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.424 [2024-10-01 13:52:34.017112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.424 [2024-10-01 13:52:34.017127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.424 [2024-10-01 13:52:34.017145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.017160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.017178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.017208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.425 [2024-10-01 13:52:34.017226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.425 [2024-10-01 13:52:34.028691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.028755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.028900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.028950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.425 [2024-10-01 13:52:34.028971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.029028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.029053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.425 [2024-10-01 13:52:34.029069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.029105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.029130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.029157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.029175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.029191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.029208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.029224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.029268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.029303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.425 [2024-10-01 13:52:34.029322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.425 [2024-10-01 13:52:34.039184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.039248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.039361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.039394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.425 [2024-10-01 13:52:34.039412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.039465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.039488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.425 [2024-10-01 13:52:34.039505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.039541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.039564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.039591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.039609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.039624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.039642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.039657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.039671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.040894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.425 [2024-10-01 13:52:34.040944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.425 [2024-10-01 13:52:34.050329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.050391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.050505] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.050549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.425 [2024-10-01 13:52:34.050571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.050627] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.050652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.425 [2024-10-01 13:52:34.050669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.050704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.050761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.050791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.050809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.050824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.050842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.050857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.050871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.050900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.425 [2024-10-01 13:52:34.050934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.425 [2024-10-01 13:52:34.062287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.062361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.062519] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.062569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.425 [2024-10-01 13:52:34.062590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.062647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.062672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.425 [2024-10-01 13:52:34.062689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.062725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.062750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.062777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.062795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.062812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.062831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.062846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.062860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.062927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.425 [2024-10-01 13:52:34.062961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.425 [2024-10-01 13:52:34.072764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.072827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.072954] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.072987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.425 [2024-10-01 13:52:34.073042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.073099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.073134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.425 [2024-10-01 13:52:34.073150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.073186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.073211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.073238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.073256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.073271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.073288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.073304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.073318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.074554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.425 [2024-10-01 13:52:34.074592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.425 [2024-10-01 13:52:34.083994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.084057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.084170] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.084207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.425 [2024-10-01 13:52:34.084226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.084289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.084313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.425 [2024-10-01 13:52:34.084330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.084364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.084387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.084415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.084433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.084449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.084467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.084482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.084522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.084557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.425 [2024-10-01 13:52:34.084575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.425 [2024-10-01 13:52:34.096049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.096118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.096263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.096297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.425 [2024-10-01 13:52:34.096316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.096367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.096392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.425 [2024-10-01 13:52:34.096409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.096443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.096467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.096507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.096527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.096544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.096561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.096577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.096591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.096636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.425 [2024-10-01 13:52:34.096660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.425 [2024-10-01 13:52:34.106620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.106678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.106792] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.106825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.425 [2024-10-01 13:52:34.106843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.106895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.106934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.425 [2024-10-01 13:52:34.106955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.106991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.107015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.107085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.107105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.107121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.107139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.107154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.107167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.108388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.425 [2024-10-01 13:52:34.108427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.425 [2024-10-01 13:52:34.117855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.117926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.425 [2024-10-01 13:52:34.118039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.118071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.425 [2024-10-01 13:52:34.118091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.118141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.425 [2024-10-01 13:52:34.118165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.425 [2024-10-01 13:52:34.118181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.425 [2024-10-01 13:52:34.118216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.118240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.425 [2024-10-01 13:52:34.118268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.118286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.118302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.118319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.425 [2024-10-01 13:52:34.118334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.425 [2024-10-01 13:52:34.118347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.425 [2024-10-01 13:52:34.118378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.426 [2024-10-01 13:52:34.118396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.426 [2024-10-01 13:52:34.129892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.129968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.130114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.130148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.426 [2024-10-01 13:52:34.130167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.130258] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.130284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.426 [2024-10-01 13:52:34.130300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.130340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.130365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.130391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.130409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.130424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.130442] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.130457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.130471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.130522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.426 [2024-10-01 13:52:34.130556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.426 [2024-10-01 13:52:34.140431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.140495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.140607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.140639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.426 [2024-10-01 13:52:34.140658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.140711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.140735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.426 [2024-10-01 13:52:34.140752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.140787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.140811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.140838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.140856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.140871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.140890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.140905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.140939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.142163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.426 [2024-10-01 13:52:34.142223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.426 [2024-10-01 13:52:34.151622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.151686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.151803] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.151836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.426 [2024-10-01 13:52:34.151855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.151908] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.151949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.426 [2024-10-01 13:52:34.151966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.152001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.152025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.152051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.152068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.152084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.152102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.152117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.152130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.152161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.426 [2024-10-01 13:52:34.152180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.426 [2024-10-01 13:52:34.163645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.163730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.163896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.163945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.426 [2024-10-01 13:52:34.163966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.164020] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.164045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.426 [2024-10-01 13:52:34.164062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.164098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.164123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.164150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.164207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.164224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.164243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.164258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.164272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.164323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.426 [2024-10-01 13:52:34.164345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.426 [2024-10-01 13:52:34.174236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.174303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.174415] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.174448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.426 [2024-10-01 13:52:34.174467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.174518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.174556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.426 [2024-10-01 13:52:34.174576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.174611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.174636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.174663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.174680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.174697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.174724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.174739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.174753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.176001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.426 [2024-10-01 13:52:34.176039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.426 [2024-10-01 13:52:34.185486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.185552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.185679] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.185712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.426 [2024-10-01 13:52:34.185731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.185783] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.185839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.426 [2024-10-01 13:52:34.185857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.185894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.185938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.185969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.185987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.186003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.186021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.186036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.186050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.186091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.426 [2024-10-01 13:52:34.186110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.426 [2024-10-01 13:52:34.197646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.197708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.197868] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.197902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.426 [2024-10-01 13:52:34.197947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.198004] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.198029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.426 [2024-10-01 13:52:34.198046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.198116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.198142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.198182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.198203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.198219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.198237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.198252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.198266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.198315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.426 [2024-10-01 13:52:34.198336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.426 [2024-10-01 13:52:34.208552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.208620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.208733] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.208776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.426 [2024-10-01 13:52:34.208795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.208846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.208871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.426 [2024-10-01 13:52:34.208887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.208935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.208963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.426 [2024-10-01 13:52:34.208991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.209008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.209024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.209042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.426 [2024-10-01 13:52:34.209057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.426 [2024-10-01 13:52:34.209070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.426 [2024-10-01 13:52:34.210292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.426 [2024-10-01 13:52:34.210330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.426 [2024-10-01 13:52:34.219763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.219828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.426 [2024-10-01 13:52:34.219958] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.219992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.426 [2024-10-01 13:52:34.220011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.426 [2024-10-01 13:52:34.220064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.426 [2024-10-01 13:52:34.220088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.427 [2024-10-01 13:52:34.220105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.220140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.220165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.220191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.220209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.220256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.220275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.220290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.220304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.220336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.427 [2024-10-01 13:52:34.220354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.427 [2024-10-01 13:52:34.231695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.231764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.231945] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.231980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.427 [2024-10-01 13:52:34.231999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.232051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.232076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.427 [2024-10-01 13:52:34.232092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.232129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.232153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.232180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.232198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.232214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.232232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.232247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.232262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.232292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.427 [2024-10-01 13:52:34.232311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.427 [2024-10-01 13:52:34.242149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.242210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.242318] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.242350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.427 [2024-10-01 13:52:34.242369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.242420] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.242444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.427 [2024-10-01 13:52:34.242496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.242532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.242571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.242601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.242619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.242634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.242651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.242666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.242679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.243897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.427 [2024-10-01 13:52:34.243948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.427 [2024-10-01 13:52:34.253446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.253509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.253622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.253654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.427 [2024-10-01 13:52:34.253674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.253725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.253749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.427 [2024-10-01 13:52:34.253765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.253800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.253824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.253851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.253869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.253884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.253902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.253936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.253952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.253984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.427 [2024-10-01 13:52:34.254003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.427 [2024-10-01 13:52:34.265524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.265627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.265798] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.265833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.427 [2024-10-01 13:52:34.265852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.265903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.265957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.427 [2024-10-01 13:52:34.265975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.266011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.266035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.266063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.266081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.266096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.266114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.266129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.266143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.266173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.427 [2024-10-01 13:52:34.266192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.427 [2024-10-01 13:52:34.276030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.276088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.276194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.276226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.427 [2024-10-01 13:52:34.276244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.276295] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.276319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.427 [2024-10-01 13:52:34.276336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.276369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.276393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.276420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.276438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.276453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.276495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.276513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.276526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.277754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.427 [2024-10-01 13:52:34.277793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.427 [2024-10-01 13:52:34.287181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.287237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.287346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.287379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.427 [2024-10-01 13:52:34.287398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.287449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.287473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.427 [2024-10-01 13:52:34.287489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.287523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.287547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.287573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.287590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.287605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.287622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.287637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.287651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.287680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.427 [2024-10-01 13:52:34.287699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.427 [2024-10-01 13:52:34.299237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.299323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.299481] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.299516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.427 [2024-10-01 13:52:34.299535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.299587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.299612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.427 [2024-10-01 13:52:34.299629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.299701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.299728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.299756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.299773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.299789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.299807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.299823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.299837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.299867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.427 [2024-10-01 13:52:34.299885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.427 [2024-10-01 13:52:34.309778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.309848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.309979] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.310013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.427 [2024-10-01 13:52:34.310033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.310084] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.310120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.427 [2024-10-01 13:52:34.310137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.310172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.310197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.310223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.310241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.310257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.310275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.427 [2024-10-01 13:52:34.310290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.427 [2024-10-01 13:52:34.310305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.427 [2024-10-01 13:52:34.311547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.427 [2024-10-01 13:52:34.311585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.427 [2024-10-01 13:52:34.321033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.321097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.427 [2024-10-01 13:52:34.321249] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.321284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.427 [2024-10-01 13:52:34.321303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.321354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.427 [2024-10-01 13:52:34.321379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.427 [2024-10-01 13:52:34.321399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.427 [2024-10-01 13:52:34.321433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.427 [2024-10-01 13:52:34.321457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.321484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.321501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.321516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.321533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.321548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.321562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.321592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.428 [2024-10-01 13:52:34.321610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.428 [2024-10-01 13:52:34.333267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.333345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.333505] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.333540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.428 [2024-10-01 13:52:34.333560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.333611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.333636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.428 [2024-10-01 13:52:34.333653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.333688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.333713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.333746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.333764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.333780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.333798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.333843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.333859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.333891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.428 [2024-10-01 13:52:34.333958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.428 [2024-10-01 13:52:34.343805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.343865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.343992] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.344026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.428 [2024-10-01 13:52:34.344046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.344097] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.344121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.428 [2024-10-01 13:52:34.344137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.344172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.344196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.344222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.344240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.344255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.344273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.344288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.344302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.345527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.428 [2024-10-01 13:52:34.345565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.428 [2024-10-01 13:52:34.355042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.355103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.355216] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.355249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.428 [2024-10-01 13:52:34.355268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.355319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.355343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.428 [2024-10-01 13:52:34.355359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.355394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.355453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.355483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.355501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.355517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.355534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.355549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.355563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.355594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.428 [2024-10-01 13:52:34.355612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.428 [2024-10-01 13:52:34.367116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.367189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.367343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.367376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.428 [2024-10-01 13:52:34.367395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.367447] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.367471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.428 [2024-10-01 13:52:34.367487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.367523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.367547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.367575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.367593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.367608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.367627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.367642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.367656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.367687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.428 [2024-10-01 13:52:34.367706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.428 [2024-10-01 13:52:34.377506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.377568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.377676] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.377707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.428 [2024-10-01 13:52:34.377758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.377815] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.377840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.428 [2024-10-01 13:52:34.377857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.377891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.377933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.377966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.377983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.378002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.378020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.378034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.378048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.379282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.428 [2024-10-01 13:52:34.379322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.428 [2024-10-01 13:52:34.388765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.388825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.388950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.388983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.428 [2024-10-01 13:52:34.389002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.389061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.389085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.428 [2024-10-01 13:52:34.389101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.389137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.389161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.389187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.389205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.389220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.389237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.389252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.389298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.389334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.428 [2024-10-01 13:52:34.389353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.428 [2024-10-01 13:52:34.400902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.400990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.401147] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.401183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.428 [2024-10-01 13:52:34.401202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.401254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.401279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.428 [2024-10-01 13:52:34.401296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.401332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.401356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.401382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.401401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.401417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.401435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.401450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.401464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.401495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.428 [2024-10-01 13:52:34.401513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.428 [2024-10-01 13:52:34.411505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.411557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.411675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.411718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.428 [2024-10-01 13:52:34.411736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.411789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.411813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.428 [2024-10-01 13:52:34.411829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.411862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.411886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.411971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.411992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.412006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.412023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.412039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.412053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.428 [2024-10-01 13:52:34.413251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.428 [2024-10-01 13:52:34.413289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.428 [2024-10-01 13:52:34.422939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.422989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.428 [2024-10-01 13:52:34.423088] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.423126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.428 [2024-10-01 13:52:34.423143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.423193] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.428 [2024-10-01 13:52:34.423218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.428 [2024-10-01 13:52:34.423233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.428 [2024-10-01 13:52:34.423265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.423289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.428 [2024-10-01 13:52:34.423316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.428 [2024-10-01 13:52:34.423333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.428 [2024-10-01 13:52:34.423347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.423363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.423379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.423392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.423421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.429 [2024-10-01 13:52:34.423439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.429 [2024-10-01 13:52:34.433071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.433128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.434374] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.434427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.429 [2024-10-01 13:52:34.434448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.434525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.434566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.429 [2024-10-01 13:52:34.434584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.435444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.435490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.435636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.435663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.435678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.435697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.435712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.435725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.435757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.429 [2024-10-01 13:52:34.435775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.429 [2024-10-01 13:52:34.444442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.444491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.444598] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.444629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.429 [2024-10-01 13:52:34.444647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.444696] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.444720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.429 [2024-10-01 13:52:34.444736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.444769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.444792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.444818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.444836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.444850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.444867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.444881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.444894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.444947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.429 [2024-10-01 13:52:34.444983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.429 [2024-10-01 13:52:34.454592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.454641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.454737] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.454769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.429 [2024-10-01 13:52:34.454786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.454837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.454860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.429 [2024-10-01 13:52:34.454876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.454909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.454952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.455726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.455764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.455783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.455801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.455816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.455830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.456038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.429 [2024-10-01 13:52:34.456064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.429 [2024-10-01 13:52:34.465286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.465335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.465434] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.465466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.429 [2024-10-01 13:52:34.465483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.465539] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.465563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.429 [2024-10-01 13:52:34.465579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.466335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.466378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.466613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.466660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.466677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.466695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.466709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.466724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.466764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.429 [2024-10-01 13:52:34.466785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.429 [2024-10-01 13:52:34.475417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.475465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.475561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.475592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.429 [2024-10-01 13:52:34.475609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.475658] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.475682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.429 [2024-10-01 13:52:34.475698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.475973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.476007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.476154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.476186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.476201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.476219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.476234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.476247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.476356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.429 [2024-10-01 13:52:34.476376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.429 [2024-10-01 13:52:34.487033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.487145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.487339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.487376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.429 [2024-10-01 13:52:34.487396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.487449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.487508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.429 [2024-10-01 13:52:34.487527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.487566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.487591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.487637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.487659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.487677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.487696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.487711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.487725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.487756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.429 [2024-10-01 13:52:34.487778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.429 [2024-10-01 13:52:34.497797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.497892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.498043] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.498078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.429 [2024-10-01 13:52:34.498098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.498152] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.498176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.429 [2024-10-01 13:52:34.498193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.498229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.498254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.498282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.498300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.498318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.498336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.498351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.498365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.499642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.429 [2024-10-01 13:52:34.499687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.429 [2024-10-01 13:52:34.509001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.509101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.509252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.509287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.429 [2024-10-01 13:52:34.509307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.509373] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.429 [2024-10-01 13:52:34.509397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.429 [2024-10-01 13:52:34.509414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.429 [2024-10-01 13:52:34.509450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.509475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.429 [2024-10-01 13:52:34.509502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.509520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.509537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.509555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.429 [2024-10-01 13:52:34.509571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.429 [2024-10-01 13:52:34.509585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.429 [2024-10-01 13:52:34.509616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.429 [2024-10-01 13:52:34.509636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.429 [2024-10-01 13:52:34.521055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.521170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.429 [2024-10-01 13:52:34.521333] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.521370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.430 [2024-10-01 13:52:34.521390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.521443] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.521467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.430 [2024-10-01 13:52:34.521483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.521521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.521546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.521595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.521617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.521677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.521698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.521713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.521727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.521759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.430 [2024-10-01 13:52:34.521778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.430 [2024-10-01 13:52:34.531541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.531644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.531797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.531832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.430 [2024-10-01 13:52:34.531853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.531905] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.531946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.430 [2024-10-01 13:52:34.531964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.532001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.532026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.533290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.533330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.533352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.533373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.533389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.533403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.533601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.430 [2024-10-01 13:52:34.533628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.430 [2024-10-01 13:52:34.542931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.543030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.543184] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.543221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.430 [2024-10-01 13:52:34.543241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.543304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.543328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.430 [2024-10-01 13:52:34.543381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.543420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.543446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.543472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.543490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.543507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.543525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.543541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.543554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.543585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.430 [2024-10-01 13:52:34.543604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.430 [2024-10-01 13:52:34.555079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.555154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.555319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.555355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.430 [2024-10-01 13:52:34.555375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.555428] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.555453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.430 [2024-10-01 13:52:34.555469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.555506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.555532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.555578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.555600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.555617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.555635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.555651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.555664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.555696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.430 [2024-10-01 13:52:34.555714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.430 [2024-10-01 13:52:34.565682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.565794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.565939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.565974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.430 [2024-10-01 13:52:34.565994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.566048] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.566072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.430 [2024-10-01 13:52:34.566088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.566125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.566150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.566183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.566201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.566217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.566235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.566250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.566263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.567541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.430 [2024-10-01 13:52:34.567580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.430 [2024-10-01 13:52:34.577021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.577118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.577266] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.577312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.430 [2024-10-01 13:52:34.577332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.577387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.577412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.430 [2024-10-01 13:52:34.577436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.577473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.577499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.577526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.577544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.577561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.577613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.577631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.577645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.577678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.430 [2024-10-01 13:52:34.577698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.430 [2024-10-01 13:52:34.589256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.589367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.589575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.589613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.430 [2024-10-01 13:52:34.589634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.589688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.589713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.430 [2024-10-01 13:52:34.589729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.589768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.589794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.589840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.589862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.589881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.589905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.589939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.589955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.589990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.430 [2024-10-01 13:52:34.590009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.430 [2024-10-01 13:52:34.599733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.599831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.600004] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.600041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.430 [2024-10-01 13:52:34.600061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.600114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.600138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.430 [2024-10-01 13:52:34.600155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.601463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.601511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.601711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.601737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.601755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.601774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.601790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.601804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.602588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.430 [2024-10-01 13:52:34.602626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.430 [2024-10-01 13:52:34.611014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.611111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.611273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.611309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.430 [2024-10-01 13:52:34.611329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.611382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.611407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.430 [2024-10-01 13:52:34.611424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.611470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.611495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.430 [2024-10-01 13:52:34.611523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.611541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.611559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.611577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.430 [2024-10-01 13:52:34.611592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.430 [2024-10-01 13:52:34.611607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.430 [2024-10-01 13:52:34.611638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.430 [2024-10-01 13:52:34.611657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.430 [2024-10-01 13:52:34.623081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.623172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.430 [2024-10-01 13:52:34.623404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.430 [2024-10-01 13:52:34.623440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.430 [2024-10-01 13:52:34.623460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.430 [2024-10-01 13:52:34.623513] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.623536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.431 [2024-10-01 13:52:34.623552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.623589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.623613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.623661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.623684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.623701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.623720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.623735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.623749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.623780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.431 [2024-10-01 13:52:34.623799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.431 [2024-10-01 13:52:34.633575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.633629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.633732] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.633773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.431 [2024-10-01 13:52:34.633791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.633841] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.633865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.431 [2024-10-01 13:52:34.633881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.633928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.633956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.633984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.634002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.634017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.634035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.634073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.634089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.634121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.431 [2024-10-01 13:52:34.634140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.431 [2024-10-01 13:52:34.644738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.644792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.644898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.644944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.431 [2024-10-01 13:52:34.644963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.645016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.645041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.431 [2024-10-01 13:52:34.645057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.645091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.645114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.645140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.645157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.645172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.645189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.645204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.645217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.645247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.431 [2024-10-01 13:52:34.645265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.431 [2024-10-01 13:52:34.656752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.656863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.657076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.657113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.431 [2024-10-01 13:52:34.657135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.657188] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.657212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.431 [2024-10-01 13:52:34.657229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.657266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.657328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.657379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.657401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.657419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.657438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.657453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.657467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.657499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.431 [2024-10-01 13:52:34.657528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.431 [2024-10-01 13:52:34.667224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.667277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.667383] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.667416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.431 [2024-10-01 13:52:34.667434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.667485] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.667509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.431 [2024-10-01 13:52:34.667525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.667559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.667583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.667610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.667627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.667642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.667660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.667675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.667688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.667718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.431 [2024-10-01 13:52:34.667737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.431 [2024-10-01 13:52:34.678378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.678429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.678530] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.678575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.431 [2024-10-01 13:52:34.678624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.678680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.678705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.431 [2024-10-01 13:52:34.678721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.678756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.678780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.678806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.678823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.678837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.678855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.678869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.678882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.678930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.431 [2024-10-01 13:52:34.678951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.431 [2024-10-01 13:52:34.690398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.690455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.690609] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.690642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.431 [2024-10-01 13:52:34.690660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.690711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.690735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.431 [2024-10-01 13:52:34.690760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.690794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.690818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.690845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.690862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.690884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.690902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.690936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.690975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.691024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.431 [2024-10-01 13:52:34.691045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.431 [2024-10-01 13:52:34.701019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.701134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.701277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.701313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.431 [2024-10-01 13:52:34.701334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.701387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.701412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.431 [2024-10-01 13:52:34.701428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.702722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.702770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.702993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.703022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.703040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.703060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.703076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.703090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.703886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.431 [2024-10-01 13:52:34.703934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.431 [2024-10-01 13:52:34.712548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.712623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.431 [2024-10-01 13:52:34.712792] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.712837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.431 [2024-10-01 13:52:34.712856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.712908] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.431 [2024-10-01 13:52:34.712932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.431 [2024-10-01 13:52:34.712948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.431 [2024-10-01 13:52:34.712998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.713023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.431 [2024-10-01 13:52:34.713086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.431 [2024-10-01 13:52:34.713106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.431 [2024-10-01 13:52:34.713121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.431 [2024-10-01 13:52:34.713139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.713154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.713167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.713198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.432 [2024-10-01 13:52:34.713216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.432 [2024-10-01 13:52:34.722725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.722815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.722948] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.722981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.432 [2024-10-01 13:52:34.722999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.724294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.724344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.432 [2024-10-01 13:52:34.724365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.724387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.725259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.725311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.725331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.725347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.725469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.432 [2024-10-01 13:52:34.725495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.725510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.725525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.725577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.432 [2024-10-01 13:52:34.733958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.734013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.734139] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.734172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.432 [2024-10-01 13:52:34.734229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.734286] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.734311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.432 [2024-10-01 13:52:34.734327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.734362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.734395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.734422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.734440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.734455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.734473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.734488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.734503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.735490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.432 [2024-10-01 13:52:34.735534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.432 [2024-10-01 13:52:34.744093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.744170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.744262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.744294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.432 [2024-10-01 13:52:34.744312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.744381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.744408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.432 [2024-10-01 13:52:34.744425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.744444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.745254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.745296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.745315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.745331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.745529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.432 [2024-10-01 13:52:34.745555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.745570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.745609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.746636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.432 [2024-10-01 13:52:34.755061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.755110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.755208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.755247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.432 [2024-10-01 13:52:34.755273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.755324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.755348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.432 [2024-10-01 13:52:34.755364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.756139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.756183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.756381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.756407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.756422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.756441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.756457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.756470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.756509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.432 [2024-10-01 13:52:34.756529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.432 [2024-10-01 13:52:34.765184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.765257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.765339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.765369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.432 [2024-10-01 13:52:34.765386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.765690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.765733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.432 [2024-10-01 13:52:34.765753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.765773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.765934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.765964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.766003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.766018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.766131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.432 [2024-10-01 13:52:34.766152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.766166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.766180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.766215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.432 [2024-10-01 13:52:34.776248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.776365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.776451] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.776481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.432 [2024-10-01 13:52:34.776498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.776566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.776594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.432 [2024-10-01 13:52:34.776611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.776630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.776663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.776683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.776697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.776711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.776742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.432 [2024-10-01 13:52:34.776761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.776775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.776788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.776833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.432 [2024-10-01 13:52:34.786472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.786570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.786658] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.786688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.432 [2024-10-01 13:52:34.786706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.786795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.786824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.432 [2024-10-01 13:52:34.786841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.786860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.786893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.786928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.786947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.786961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.788160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.432 [2024-10-01 13:52:34.788199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.788218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.788233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.789174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.432 [2024-10-01 13:52:34.797199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.797249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.797363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.797395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.432 [2024-10-01 13:52:34.797413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.797463] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.797486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.432 [2024-10-01 13:52:34.797502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.797535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.797558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.797584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.797601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.797615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.797632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.797646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.797660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.797689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.432 [2024-10-01 13:52:34.797707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.432 [2024-10-01 13:52:34.808725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.808823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.432 [2024-10-01 13:52:34.808971] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.809008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.432 [2024-10-01 13:52:34.809027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.809081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.432 [2024-10-01 13:52:34.809114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.432 [2024-10-01 13:52:34.809131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.432 [2024-10-01 13:52:34.809168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.809193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.432 [2024-10-01 13:52:34.809220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.809238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.809255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.809274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.432 [2024-10-01 13:52:34.809289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.432 [2024-10-01 13:52:34.809303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.432 [2024-10-01 13:52:34.809333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.433 [2024-10-01 13:52:34.809352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.433 [2024-10-01 13:52:34.818947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.819082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.819211] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.819247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.433 [2024-10-01 13:52:34.819266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.820578] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.820623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.433 [2024-10-01 13:52:34.820646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.820669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.821577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.821621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.821642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.821703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.821985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.433 [2024-10-01 13:52:34.822021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.822037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.822052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.822085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.433 [2024-10-01 13:52:34.829697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.829793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.829973] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.830011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.433 [2024-10-01 13:52:34.830034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.830087] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.830111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.433 [2024-10-01 13:52:34.830127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.830164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.830190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.830217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.830236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.830253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.830271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.830287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.830301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.830332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.433 [2024-10-01 13:52:34.830352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.433 [2024-10-01 13:52:34.841572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.841681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.841845] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.841882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.433 [2024-10-01 13:52:34.841903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.841977] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.842003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.433 [2024-10-01 13:52:34.842060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.842101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.842127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.842177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.842200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.842217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.842237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.842252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.842266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.842297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.433 [2024-10-01 13:52:34.842316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.433 [2024-10-01 13:52:34.851815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.851897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.852038] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.852072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.433 [2024-10-01 13:52:34.852093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.852144] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.852169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.433 [2024-10-01 13:52:34.852186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.852226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.852252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.852279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.852296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.852313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.852333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.852349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.852363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.853593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.433 [2024-10-01 13:52:34.853631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.433 [2024-10-01 13:52:34.862970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.863050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.863156] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.863188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.433 [2024-10-01 13:52:34.863206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.863256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.863280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.433 [2024-10-01 13:52:34.863296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.863330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.863353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.863380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.863397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.863411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.863429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.863444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.863457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.863487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.433 [2024-10-01 13:52:34.863505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.433 8018.00 IOPS, 31.32 MiB/s [2024-10-01 13:52:34.875222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.875324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.875535] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.875573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.433 [2024-10-01 13:52:34.875593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.875646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.875671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.433 [2024-10-01 13:52:34.875688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.875725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.875752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.875779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.875797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.875815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.875870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.875888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.875902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.875956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.433 [2024-10-01 13:52:34.875978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.433 [2024-10-01 13:52:34.885785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.885905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.886072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.886108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.433 [2024-10-01 13:52:34.886128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.886182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.886206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.433 [2024-10-01 13:52:34.886223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.886260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.886285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.887570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.887612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.887634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.887654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.887670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.887684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.887890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.433 [2024-10-01 13:52:34.887934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.433 [2024-10-01 13:52:34.897089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.897187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.897339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.897374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.433 [2024-10-01 13:52:34.897395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.897448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.897473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.433 [2024-10-01 13:52:34.897520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.897561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.897587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.897614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.897632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.897649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.897667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.897683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.897696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.897733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.433 [2024-10-01 13:52:34.897752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.433 [2024-10-01 13:52:34.908995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.909109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.909345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.909385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.433 [2024-10-01 13:52:34.909406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.909460] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.909484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.433 [2024-10-01 13:52:34.909501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.433 [2024-10-01 13:52:34.909540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.909566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.433 [2024-10-01 13:52:34.909616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.909639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.909657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.909677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.433 [2024-10-01 13:52:34.909693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.433 [2024-10-01 13:52:34.909707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.433 [2024-10-01 13:52:34.909739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.433 [2024-10-01 13:52:34.909758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.433 [2024-10-01 13:52:34.919650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.919705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.433 [2024-10-01 13:52:34.919849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.433 [2024-10-01 13:52:34.919883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.434 [2024-10-01 13:52:34.919901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.919971] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.919997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.434 [2024-10-01 13:52:34.920014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.920050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.920074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.920101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.920119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.920134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.920152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.920166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.920180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.920211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.434 [2024-10-01 13:52:34.920230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.434 [2024-10-01 13:52:34.931241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:34.931320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:34.931446] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.931481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.434 [2024-10-01 13:52:34.931501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.931554] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.931579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.434 [2024-10-01 13:52:34.931595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.931630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.931654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.931681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.931698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.931715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.931733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.931776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.931791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.931834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.434 [2024-10-01 13:52:34.931852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.434 [2024-10-01 13:52:34.943449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:34.943566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:34.943775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.943813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.434 [2024-10-01 13:52:34.943833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.943888] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.943930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.434 [2024-10-01 13:52:34.943951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.943991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.944016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.944044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.944068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.944086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.944105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.944120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.944135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.944167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.434 [2024-10-01 13:52:34.944185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.434 [2024-10-01 13:52:34.954033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:34.954142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:34.954290] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.954326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.434 [2024-10-01 13:52:34.954346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.954400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.954424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.434 [2024-10-01 13:52:34.954441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.955754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.955802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.956025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.956054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.956071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.956091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.956107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.956121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.956882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.434 [2024-10-01 13:52:34.956933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.434 [2024-10-01 13:52:34.965282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:34.965367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:34.965523] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.965559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.434 [2024-10-01 13:52:34.965580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.965631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.965656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.434 [2024-10-01 13:52:34.965672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.965709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.965733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.965760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.965778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.965795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.965814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.965829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.965842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.965873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.434 [2024-10-01 13:52:34.965892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.434 [2024-10-01 13:52:34.977099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:34.977188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:34.977390] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.977456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.434 [2024-10-01 13:52:34.977478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.977533] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.977558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.434 [2024-10-01 13:52:34.977574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.977613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.977638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.977670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.977687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.977704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.977723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.977739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.977752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.977804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.434 [2024-10-01 13:52:34.977830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.434 [2024-10-01 13:52:34.987839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:34.987894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:34.988016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.988059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.434 [2024-10-01 13:52:34.988078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.988128] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.988152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.434 [2024-10-01 13:52:34.988168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.988202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.988226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.988253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.988271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.988287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.988305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.988320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.988358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.989583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.434 [2024-10-01 13:52:34.989623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.434 [2024-10-01 13:52:34.999401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:34.999496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:34.999647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.999683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.434 [2024-10-01 13:52:34.999703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.999756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:34.999780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.434 [2024-10-01 13:52:34.999797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:34.999834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.999859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:34.999887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.999904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.999938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:34.999957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.434 [2024-10-01 13:52:34.999973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.434 [2024-10-01 13:52:34.999999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.434 [2024-10-01 13:52:35.000031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.434 [2024-10-01 13:52:35.000051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.434 [2024-10-01 13:52:35.009609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:35.010985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.434 [2024-10-01 13:52:35.011136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:35.011173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.434 [2024-10-01 13:52:35.011193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:35.012151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.434 [2024-10-01 13:52:35.012195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.434 [2024-10-01 13:52:35.012217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.434 [2024-10-01 13:52:35.012241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:35.012427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.434 [2024-10-01 13:52:35.012459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.012475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.012492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.012530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.435 [2024-10-01 13:52:35.012550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.012565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.012578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.012605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.435 [2024-10-01 13:52:35.021010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.021204] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.021243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.435 [2024-10-01 13:52:35.021263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.021316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.021359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.021392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.021411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.021427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.021459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.435 [2024-10-01 13:52:35.021520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.021547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.435 [2024-10-01 13:52:35.021565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.022515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.022784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.022813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.022829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.022923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.435 [2024-10-01 13:52:35.032033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.032122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.033205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.033256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.435 [2024-10-01 13:52:35.033311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.033370] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.033395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.435 [2024-10-01 13:52:35.033412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.034084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.034142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.034247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.034272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.034290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.034310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.034325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.034340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.034372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.435 [2024-10-01 13:52:35.034392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.435 [2024-10-01 13:52:35.042208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.042306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.042402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.042433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.435 [2024-10-01 13:52:35.042451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.042520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.042566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.435 [2024-10-01 13:52:35.042585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.042605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.042640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.042661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.042680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.042696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.042728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.435 [2024-10-01 13:52:35.042746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.042761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.042802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.042833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.435 [2024-10-01 13:52:35.053445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.053497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.053597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.053628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.435 [2024-10-01 13:52:35.053646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.053702] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.053735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.435 [2024-10-01 13:52:35.053751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.053784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.053807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.053852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.053874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.053889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.053906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.053940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.053955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.054860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.435 [2024-10-01 13:52:35.054922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.435 [2024-10-01 13:52:35.063589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.063717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.063853] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.063888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.435 [2024-10-01 13:52:35.063908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.064829] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.064873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.435 [2024-10-01 13:52:35.064894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.064930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.065145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.065204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.065223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.065239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.066271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.435 [2024-10-01 13:52:35.066320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.066339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.066354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.067042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.435 [2024-10-01 13:52:35.074773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.074872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.075803] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.075854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.435 [2024-10-01 13:52:35.075877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.075947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.075975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.435 [2024-10-01 13:52:35.075993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.076197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.076230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.076267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.076287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.076304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.076323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.076340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.076354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.076387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.435 [2024-10-01 13:52:35.076406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.435 [2024-10-01 13:52:35.084993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.085102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.085268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.085305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.435 [2024-10-01 13:52:35.085326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.085425] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.085451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.435 [2024-10-01 13:52:35.085468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.085757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.085805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.085977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.086005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.086023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.086042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.086057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.086071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.086193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.435 [2024-10-01 13:52:35.086224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.435 [2024-10-01 13:52:35.096334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.096439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.096596] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.096633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.435 [2024-10-01 13:52:35.096654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.096708] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.096733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.435 [2024-10-01 13:52:35.096750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.096796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.096821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.096848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.096867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.096884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.096902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.435 [2024-10-01 13:52:35.096937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.435 [2024-10-01 13:52:35.096953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.435 [2024-10-01 13:52:35.096988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.435 [2024-10-01 13:52:35.097036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.435 [2024-10-01 13:52:35.107864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.107966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.435 [2024-10-01 13:52:35.108832] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.108882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.435 [2024-10-01 13:52:35.108906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.108981] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.435 [2024-10-01 13:52:35.109007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.435 [2024-10-01 13:52:35.109024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.435 [2024-10-01 13:52:35.109216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.109249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.435 [2024-10-01 13:52:35.109285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.109306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.109322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.109341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.109356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.109371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.109403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.436 [2024-10-01 13:52:35.109422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.436 [2024-10-01 13:52:35.118046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.118122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.118207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.118238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.436 [2024-10-01 13:52:35.118256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.118324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.118351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.436 [2024-10-01 13:52:35.118368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.118388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.118420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.118441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.118456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.118504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.118551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.436 [2024-10-01 13:52:35.118581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.118596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.118611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.118639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.436 [2024-10-01 13:52:35.128159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.128353] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.128390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.436 [2024-10-01 13:52:35.128419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.128472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.128515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.128548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.128567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.128583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.128614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.436 [2024-10-01 13:52:35.128675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.128702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.436 [2024-10-01 13:52:35.128721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.128766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.128797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.128815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.128829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.128859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.436 [2024-10-01 13:52:35.139270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.139691] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.139743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.436 [2024-10-01 13:52:35.139766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.139816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.139857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.139988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.140018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.436 [2024-10-01 13:52:35.140036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.140053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.140067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.140083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.140120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.436 [2024-10-01 13:52:35.140143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.140173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.140190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.140204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.140231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.436 [2024-10-01 13:52:35.150279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.150388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.150535] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.150592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.436 [2024-10-01 13:52:35.150613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.150669] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.150694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.436 [2024-10-01 13:52:35.150711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.151703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.151751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.152041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.152073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.152092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.152112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.152128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.152142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.153468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.436 [2024-10-01 13:52:35.153507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.436 [2024-10-01 13:52:35.161674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.162406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.162568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.162606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.436 [2024-10-01 13:52:35.162626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.162758] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.162787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.436 [2024-10-01 13:52:35.162806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.162835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.162871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.162893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.162908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.162942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.162978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.436 [2024-10-01 13:52:35.162998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.163013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.163027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.163055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.436 [2024-10-01 13:52:35.173438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.173546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.173654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.173687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.436 [2024-10-01 13:52:35.173706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.173807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.173835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.436 [2024-10-01 13:52:35.173852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.173874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.173938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.173965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.173981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.174029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.174064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.436 [2024-10-01 13:52:35.174085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.174099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.174113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.174141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.436 [2024-10-01 13:52:35.184893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.184964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.185778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.185824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.436 [2024-10-01 13:52:35.185845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.185899] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.185939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.436 [2024-10-01 13:52:35.185958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.186143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.186174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.186230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.186254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.186270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.186288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.186304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.186318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.186349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.436 [2024-10-01 13:52:35.186368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.436 [2024-10-01 13:52:35.195073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.195215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.436 [2024-10-01 13:52:35.195347] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.195382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.436 [2024-10-01 13:52:35.195401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.195472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.436 [2024-10-01 13:52:35.195499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.436 [2024-10-01 13:52:35.195550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.436 [2024-10-01 13:52:35.195575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.195612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.436 [2024-10-01 13:52:35.195633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.195648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.436 [2024-10-01 13:52:35.195665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.436 [2024-10-01 13:52:35.195697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.436 [2024-10-01 13:52:35.195717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.436 [2024-10-01 13:52:35.195731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.195745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.195773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.437 [2024-10-01 13:52:35.205248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.205489] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.205527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.437 [2024-10-01 13:52:35.205548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.205601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.205644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.205677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.205695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.205713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.205745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.437 [2024-10-01 13:52:35.205806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.205833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.437 [2024-10-01 13:52:35.205851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.205883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.205930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.205952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.205975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.206005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.437 [2024-10-01 13:52:35.216379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.216489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.217462] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.217513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.437 [2024-10-01 13:52:35.217536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.217591] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.217616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.437 [2024-10-01 13:52:35.217632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.217839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.217872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.217924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.217948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.217965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.217984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.218000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.218014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.218047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.437 [2024-10-01 13:52:35.218067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.437 [2024-10-01 13:52:35.226628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.226760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.226880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.226927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.437 [2024-10-01 13:52:35.226949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.227022] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.227049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.437 [2024-10-01 13:52:35.227067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.227088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.227123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.227144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.227159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.227176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.227207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.437 [2024-10-01 13:52:35.227266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.227283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.227297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.228593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.437 [2024-10-01 13:52:35.236788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.236972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.237009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.437 [2024-10-01 13:52:35.237029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.237082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.237145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.237180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.237198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.237214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.237245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.437 [2024-10-01 13:52:35.237305] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.237332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.437 [2024-10-01 13:52:35.237349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.237381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.237412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.237430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.237444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.237473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.437 [2024-10-01 13:52:35.247799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.247862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.248674] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.248720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.437 [2024-10-01 13:52:35.248742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.248801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.248826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.437 [2024-10-01 13:52:35.248843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.249075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.249109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.249145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.249164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.249180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.249198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.249213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.249226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.249257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.437 [2024-10-01 13:52:35.249275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.437 [2024-10-01 13:52:35.257973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.258108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.258238] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.258273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.437 [2024-10-01 13:52:35.258293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.258363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.258391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.437 [2024-10-01 13:52:35.258408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.258430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.258465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.258486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.258502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.258519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.258567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.437 [2024-10-01 13:52:35.258590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.258605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.258620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.259953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.437 [2024-10-01 13:52:35.268141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.268359] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.268396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.437 [2024-10-01 13:52:35.268460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.268517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.268560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.268593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.268611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.268628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.268659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.437 [2024-10-01 13:52:35.268722] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.268750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.437 [2024-10-01 13:52:35.268767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.268799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.268831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.268848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.268863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.268892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.437 [2024-10-01 13:52:35.279688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.279800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.280133] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.280172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.437 [2024-10-01 13:52:35.280193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.280246] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.280271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.437 [2024-10-01 13:52:35.280287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.280370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.280400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.280430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.280449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.280466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.280486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.280501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.280546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.280580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.437 [2024-10-01 13:52:35.280599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.437 [2024-10-01 13:52:35.291049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.291157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.291308] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.291345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.437 [2024-10-01 13:52:35.291366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.291420] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.437 [2024-10-01 13:52:35.291444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.437 [2024-10-01 13:52:35.291461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.437 [2024-10-01 13:52:35.292415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.292463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.437 [2024-10-01 13:52:35.292715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.292753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.292775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.292803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.437 [2024-10-01 13:52:35.292819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.437 [2024-10-01 13:52:35.292832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.437 [2024-10-01 13:52:35.294127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.437 [2024-10-01 13:52:35.294164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.437 [2024-10-01 13:52:35.302781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.437 [2024-10-01 13:52:35.302838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.303040] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.303075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.438 [2024-10-01 13:52:35.303095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.303147] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.303171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.438 [2024-10-01 13:52:35.303187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.303222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.303288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.303318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.303336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.303352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.303370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.303385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.303408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.303439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.438 [2024-10-01 13:52:35.303457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.438 [2024-10-01 13:52:35.314379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.314438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.314558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.314591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.438 [2024-10-01 13:52:35.314609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.314661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.314686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.438 [2024-10-01 13:52:35.314703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.314738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.314773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.314801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.314818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.314833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.314850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.314866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.314880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.314910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.438 [2024-10-01 13:52:35.314950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.438 [2024-10-01 13:52:35.324527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.325810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.325947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.325981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.438 [2024-10-01 13:52:35.326035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.326990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.327048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.438 [2024-10-01 13:52:35.327069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.327092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.327342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.327382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.327401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.327418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.327453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.438 [2024-10-01 13:52:35.327473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.327487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.327501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.327530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.438 [2024-10-01 13:52:35.334656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.334783] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.334828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.438 [2024-10-01 13:52:35.334849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.334887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.334934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.334955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.334970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.335001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.438 [2024-10-01 13:52:35.335893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.336022] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.336066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.438 [2024-10-01 13:52:35.336086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.336119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.336151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.336169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.336212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.336245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.438 [2024-10-01 13:52:35.345376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.345509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.345552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.438 [2024-10-01 13:52:35.345573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.345607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.345638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.345656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.345671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.345702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.438 [2024-10-01 13:52:35.345989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.346103] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.346143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.438 [2024-10-01 13:52:35.346163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.346196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.346228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.346245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.346260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.346289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.438 [2024-10-01 13:52:35.356746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.356803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.357636] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.357682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.438 [2024-10-01 13:52:35.357704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.357757] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.357782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.438 [2024-10-01 13:52:35.357798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.358026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.358065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.358141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.358162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.358177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.358195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.358210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.358223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.358254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.438 [2024-10-01 13:52:35.358272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.438 [2024-10-01 13:52:35.366888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.366978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.367069] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.367100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.438 [2024-10-01 13:52:35.367118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.367184] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.367211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.438 [2024-10-01 13:52:35.367228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.367247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.367280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.367300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.367314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.367328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.368558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.438 [2024-10-01 13:52:35.368597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.368616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.368630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.368860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.438 [2024-10-01 13:52:35.376993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.377119] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.377164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.438 [2024-10-01 13:52:35.377186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.377232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.377302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.377337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.377354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.377368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.377398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.438 [2024-10-01 13:52:35.377458] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.377484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.438 [2024-10-01 13:52:35.377500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.377532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.377562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.377579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.377593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.377622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.438 [2024-10-01 13:52:35.387414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.388224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.388330] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.388371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.438 [2024-10-01 13:52:35.388390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.388631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.388671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.438 [2024-10-01 13:52:35.388691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.388711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.388759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.438 [2024-10-01 13:52:35.388782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.388797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.388811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.388844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.438 [2024-10-01 13:52:35.388862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.438 [2024-10-01 13:52:35.388877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.438 [2024-10-01 13:52:35.388891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.438 [2024-10-01 13:52:35.388957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.438 [2024-10-01 13:52:35.397520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.438 [2024-10-01 13:52:35.397643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.438 [2024-10-01 13:52:35.397689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.438 [2024-10-01 13:52:35.397709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.438 [2024-10-01 13:52:35.397743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.397781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.397798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.397813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.399085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.439 [2024-10-01 13:52:35.399379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.399488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.399528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.439 [2024-10-01 13:52:35.399548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.399581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.399612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.399629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.399643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.399673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.439 [2024-10-01 13:52:35.407613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.407736] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.407777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.439 [2024-10-01 13:52:35.407798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.407831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.407862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.407880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.407895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.407941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.439 [2024-10-01 13:52:35.410419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.411141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.411187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.439 [2024-10-01 13:52:35.411237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.411352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.411403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.411424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.411439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.411470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.439 [2024-10-01 13:52:35.418002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.418839] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.418892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.439 [2024-10-01 13:52:35.418925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.419132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.419192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.419214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.419228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.419268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.439 [2024-10-01 13:52:35.422012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.422126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.422167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.439 [2024-10-01 13:52:35.422187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.422221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.422252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.422269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.422284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.422315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.439 [2024-10-01 13:52:35.428096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.428216] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.428248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.439 [2024-10-01 13:52:35.428265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.428299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.428331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.428372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.428388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.428421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.439 [2024-10-01 13:52:35.433189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.434042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.434088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.439 [2024-10-01 13:52:35.434109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.434295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.434354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.434376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.434391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.434423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.439 [2024-10-01 13:52:35.438189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.438319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.438361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.439 [2024-10-01 13:52:35.438381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.438414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.438445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.438473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.438488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.438518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.439 [2024-10-01 13:52:35.443279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.443393] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.443426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.439 [2024-10-01 13:52:35.443444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.443477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.443517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.443534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.443549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.443579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.439 [2024-10-01 13:52:35.448550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.449402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.449447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.439 [2024-10-01 13:52:35.449468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.449669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.449730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.449752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.449767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.449798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.439 [2024-10-01 13:52:35.453386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.453510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.453550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.439 [2024-10-01 13:52:35.453571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.453604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.453634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.453652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.453666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.453697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.439 [2024-10-01 13:52:35.458644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.458757] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.458790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.439 [2024-10-01 13:52:35.458808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.458840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.458872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.458889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.458903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.458962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.439 [2024-10-01 13:52:35.463816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.464644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.464689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.439 [2024-10-01 13:52:35.464710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.464936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.464997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.465019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.465034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.465066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.439 [2024-10-01 13:52:35.468733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.468860] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.468902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.439 [2024-10-01 13:52:35.468938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.468974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.469006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.469023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.469037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.469067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.439 [2024-10-01 13:52:35.473925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.474037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.474068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.439 [2024-10-01 13:52:35.474086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.474127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.474158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.474176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.474190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.474220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.439 [2024-10-01 13:52:35.479074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.479190] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.479232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.439 [2024-10-01 13:52:35.479253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.439 [2024-10-01 13:52:35.480000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.439 [2024-10-01 13:52:35.480220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.439 [2024-10-01 13:52:35.480257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.439 [2024-10-01 13:52:35.480295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.439 [2024-10-01 13:52:35.480340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.439 [2024-10-01 13:52:35.484032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.439 [2024-10-01 13:52:35.484152] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.439 [2024-10-01 13:52:35.484193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.439 [2024-10-01 13:52:35.484213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.484246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.484277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.484294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.484309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.484340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.440 [2024-10-01 13:52:35.489166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.489281] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.489312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.440 [2024-10-01 13:52:35.489330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.489363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.489394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.489411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.489425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.489455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.440 [2024-10-01 13:52:35.494349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.495191] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.495235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.440 [2024-10-01 13:52:35.495256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.495441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.495500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.495522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.495537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.495570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.440 [2024-10-01 13:52:35.499254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.499404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.499446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.440 [2024-10-01 13:52:35.499467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.499500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.499531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.499548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.499562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.499592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.440 [2024-10-01 13:52:35.504441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.504556] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.504601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.440 [2024-10-01 13:52:35.504621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.504655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.504686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.504703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.504717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.504748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.440 [2024-10-01 13:52:35.509575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.510421] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.510467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.440 [2024-10-01 13:52:35.510488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.510699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.510761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.510783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.510797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.510828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.440 [2024-10-01 13:52:35.514528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.514665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.514705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.440 [2024-10-01 13:52:35.514725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.514758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.514811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.514830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.514845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.514875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.440 [2024-10-01 13:52:35.519663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.519778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.519818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.440 [2024-10-01 13:52:35.519839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.519871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.519902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.519934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.519950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.519982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.440 [2024-10-01 13:52:35.524894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.525727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.525773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.440 [2024-10-01 13:52:35.525794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.525993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.526051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.526074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.526088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.526120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.440 [2024-10-01 13:52:35.529750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.529887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.529939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.440 [2024-10-01 13:52:35.529961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.529994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.530025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.530042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.530056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.530105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.440 [2024-10-01 13:52:35.535002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.535117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.535150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.440 [2024-10-01 13:52:35.535168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.535201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.535232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.535249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.535263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.535293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.440 [2024-10-01 13:52:35.540067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.540901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.540961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.440 [2024-10-01 13:52:35.540983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.541182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.541243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.541264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.541278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.541310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.440 [2024-10-01 13:52:35.545101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.545227] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.545268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.440 [2024-10-01 13:52:35.545288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.545322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.545354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.545371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.545386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.545416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.440 [2024-10-01 13:52:35.550157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.550312] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.550356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.440 [2024-10-01 13:52:35.550400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.550438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.550470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.550487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.550501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.551784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.440 [2024-10-01 13:52:35.555221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.555336] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.555369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.440 [2024-10-01 13:52:35.555387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.556137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.556343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.556379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.556398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.556439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.440 [2024-10-01 13:52:35.560278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.560403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.560445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.440 [2024-10-01 13:52:35.560467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.560500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.560531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.560548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.560562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.560591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.440 [2024-10-01 13:52:35.565309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.565422] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.565468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.440 [2024-10-01 13:52:35.565489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.565522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.565553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.565589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.565605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.565637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.440 [2024-10-01 13:52:35.570613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.570729] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.570772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.440 [2024-10-01 13:52:35.570792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.571539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.571759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.440 [2024-10-01 13:52:35.571801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.440 [2024-10-01 13:52:35.571819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.440 [2024-10-01 13:52:35.571860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.440 [2024-10-01 13:52:35.575401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.440 [2024-10-01 13:52:35.575525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.440 [2024-10-01 13:52:35.575558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.440 [2024-10-01 13:52:35.575575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.440 [2024-10-01 13:52:35.575609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.440 [2024-10-01 13:52:35.575654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.575674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.575689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.575719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.441 [2024-10-01 13:52:35.580698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.580815] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.580846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.441 [2024-10-01 13:52:35.580863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.580896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.580948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.580968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.580982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.581012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.441 [2024-10-01 13:52:35.585986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.586114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.586145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.441 [2024-10-01 13:52:35.586163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.586935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.587140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.587177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.587195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.587236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.441 [2024-10-01 13:52:35.590791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.590935] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.590977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.441 [2024-10-01 13:52:35.590997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.591030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.591061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.591078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.591093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.591123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.441 [2024-10-01 13:52:35.596075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.596189] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.596221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.441 [2024-10-01 13:52:35.596239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.596272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.596303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.596320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.596336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.596366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.441 [2024-10-01 13:52:35.601541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.602381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.602426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.441 [2024-10-01 13:52:35.602447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.602698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.602750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.602771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.602785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.602817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.441 [2024-10-01 13:52:35.606167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.606315] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.606356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.441 [2024-10-01 13:52:35.606377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.606410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.606460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.606482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.606497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.606527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.441 [2024-10-01 13:52:35.611632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.611746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.611778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.441 [2024-10-01 13:52:35.611795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.611828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.611859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.611875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.611890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.611937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.441 [2024-10-01 13:52:35.617230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.617352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.617393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.441 [2024-10-01 13:52:35.617413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.618162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.618376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.618413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.618449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.618492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.441 [2024-10-01 13:52:35.621727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.621856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.621889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.441 [2024-10-01 13:52:35.621907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.621957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.621989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.622007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.622021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.622051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.441 [2024-10-01 13:52:35.627324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.627439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.627471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.441 [2024-10-01 13:52:35.627489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.627522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.627553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.627570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.627584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.627615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.441 [2024-10-01 13:52:35.632710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.633545] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.633590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.441 [2024-10-01 13:52:35.633611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.633795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.633858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.633880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.633894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.633940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.441 [2024-10-01 13:52:35.637414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.637570] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.637612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.441 [2024-10-01 13:52:35.637633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.637667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.637698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.637716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.637731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.637761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.441 [2024-10-01 13:52:35.642804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.642942] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.642997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.441 [2024-10-01 13:52:35.643017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.643055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.643087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.643104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.643118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.643149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.441 [2024-10-01 13:52:35.648109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.648938] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.648983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.441 [2024-10-01 13:52:35.649004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.649213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.649273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.649295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.649309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.649341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.441 [2024-10-01 13:52:35.652930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.653049] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.653090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.441 [2024-10-01 13:52:35.653111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.653145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.653197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.653216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.653230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.653260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.441 [2024-10-01 13:52:35.658200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.658326] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.658358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.441 [2024-10-01 13:52:35.658377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.658411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.441 [2024-10-01 13:52:35.658442] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.441 [2024-10-01 13:52:35.658460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.441 [2024-10-01 13:52:35.658474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.441 [2024-10-01 13:52:35.658504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.441 [2024-10-01 13:52:35.669083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.441 [2024-10-01 13:52:35.671396] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.441 [2024-10-01 13:52:35.671505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.441 [2024-10-01 13:52:35.671552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.441 [2024-10-01 13:52:35.673598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.673717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.674278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.674358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.442 [2024-10-01 13:52:35.674400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.674435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.674467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.674500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.676839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.442 [2024-10-01 13:52:35.676942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.678854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.678952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.678993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.679255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.442 [2024-10-01 13:52:35.680630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.681806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.681850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.442 [2024-10-01 13:52:35.681871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.682096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.683030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.683067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.683088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.683757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.442 [2024-10-01 13:52:35.684845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.685434] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.685476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.442 [2024-10-01 13:52:35.685497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.686713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.687650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.687688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.687707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.688292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.442 [2024-10-01 13:52:35.691298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.691419] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.691458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.442 [2024-10-01 13:52:35.691478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.691513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.691544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.691561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.691576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.691608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.442 [2024-10-01 13:52:35.694947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.695058] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.695097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.442 [2024-10-01 13:52:35.695134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.695170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.695222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.695244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.695259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.696448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.442 [2024-10-01 13:52:35.701388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.701500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.701531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.442 [2024-10-01 13:52:35.701549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.701583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.701615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.701632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.701647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.701677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.442 [2024-10-01 13:52:35.705035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.705150] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.705190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.442 [2024-10-01 13:52:35.705211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.705245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.705277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.705296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.705310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.705341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.442 [2024-10-01 13:52:35.711604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.711717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.711750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.442 [2024-10-01 13:52:35.711769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.712514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.712732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.712784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.712803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.712847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.442 [2024-10-01 13:52:35.715626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.715737] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.715768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.442 [2024-10-01 13:52:35.715787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.715820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.715852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.715869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.715885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.715932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.442 [2024-10-01 13:52:35.721690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.721802] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.721834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.442 [2024-10-01 13:52:35.721853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.721886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.721935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.721957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.721971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.723195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.442 [2024-10-01 13:52:35.726597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.726708] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.726739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.442 [2024-10-01 13:52:35.726758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.727502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.727699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.727733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.727751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.727792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.442 [2024-10-01 13:52:35.731776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.731887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.731936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.442 [2024-10-01 13:52:35.731958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.731992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.732023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.732041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.732055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.732085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.442 [2024-10-01 13:52:35.736685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.736796] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.736829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.442 [2024-10-01 13:52:35.736848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.736881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.736929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.736950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.736965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.736996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.442 [2024-10-01 13:52:35.742656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.742769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.742808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.442 [2024-10-01 13:52:35.742828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.742862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.742893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.742924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.742943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.742975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.442 [2024-10-01 13:52:35.746777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.746887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.746931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.442 [2024-10-01 13:52:35.746952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.747004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.747038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.747056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.747070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.747100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.442 [2024-10-01 13:52:35.753455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.753570] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.753601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.442 [2024-10-01 13:52:35.753621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.753654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.753686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.753703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.753718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.753750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.442 [2024-10-01 13:52:35.757646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.757757] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.757788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.442 [2024-10-01 13:52:35.757806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.757839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.757870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.757887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.442 [2024-10-01 13:52:35.757902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.442 [2024-10-01 13:52:35.757950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.442 [2024-10-01 13:52:35.765011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.442 [2024-10-01 13:52:35.765124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.442 [2024-10-01 13:52:35.765156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.442 [2024-10-01 13:52:35.765174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.442 [2024-10-01 13:52:35.765206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.442 [2024-10-01 13:52:35.765238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.442 [2024-10-01 13:52:35.765255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.765291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.765324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.443 [2024-10-01 13:52:35.768538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.768648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.768680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.443 [2024-10-01 13:52:35.768698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.768731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.768763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.768780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.768794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.768825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.443 [2024-10-01 13:52:35.775748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.775857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.775889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.443 [2024-10-01 13:52:35.775906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.775956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.776004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.776022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.776036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.776066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.443 [2024-10-01 13:52:35.780027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.780187] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.780226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.443 [2024-10-01 13:52:35.780246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.780279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.780310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.780327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.780341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.780371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.443 [2024-10-01 13:52:35.786813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.787665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.787708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.443 [2024-10-01 13:52:35.787729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.787905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.787975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.787996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.788011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.788043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.443 [2024-10-01 13:52:35.790863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.790987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.791026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.443 [2024-10-01 13:52:35.791047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.791080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.791112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.791130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.791144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.791175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.443 [2024-10-01 13:52:35.796935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.797043] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.797075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.443 [2024-10-01 13:52:35.797093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.797124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.797155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.797173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.797187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.798415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.443 [2024-10-01 13:52:35.801865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.801988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.802031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.443 [2024-10-01 13:52:35.802051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.802790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.803042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.803079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.803097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.803139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.443 [2024-10-01 13:52:35.807019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.807128] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.807167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.443 [2024-10-01 13:52:35.807188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.807222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.807253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.807270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.807285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.807316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.443 [2024-10-01 13:52:35.811969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.812079] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.812118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.443 [2024-10-01 13:52:35.812139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.812172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.812204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.812221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.812236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.812267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.443 [2024-10-01 13:52:35.817891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.818016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.818056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.443 [2024-10-01 13:52:35.818076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.818110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.818142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.818159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.818174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.818223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.443 [2024-10-01 13:52:35.822052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.822162] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.822201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.443 [2024-10-01 13:52:35.822222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.822255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.822286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.822304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.822318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.822349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.443 [2024-10-01 13:52:35.828675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.828784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.828814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.443 [2024-10-01 13:52:35.828831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.828863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.828894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.828927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.828945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.828976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.443 [2024-10-01 13:52:35.832858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.832982] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.833020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.443 [2024-10-01 13:52:35.833040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.833072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.833103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.833121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.833135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.833164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.443 [2024-10-01 13:52:35.840035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.840142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.840180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.443 [2024-10-01 13:52:35.840217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.840251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.840282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.840299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.840314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.840344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.443 [2024-10-01 13:52:35.843544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.843652] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.843690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.443 [2024-10-01 13:52:35.843709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.843741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.843771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.843788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.843802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.843832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.443 [2024-10-01 13:52:35.850718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.850829] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.850862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.443 [2024-10-01 13:52:35.850881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.850927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.850962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.850981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.850994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.851026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.443 [2024-10-01 13:52:35.855006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.855126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.855165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.443 [2024-10-01 13:52:35.855185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.855219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.855251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.855284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.443 [2024-10-01 13:52:35.855299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.443 [2024-10-01 13:52:35.855331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.443 [2024-10-01 13:52:35.861770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.443 [2024-10-01 13:52:35.861883] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.443 [2024-10-01 13:52:35.861934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.443 [2024-10-01 13:52:35.861956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.443 [2024-10-01 13:52:35.862706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.443 [2024-10-01 13:52:35.862933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.443 [2024-10-01 13:52:35.862968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.862986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.863028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.444 8039.29 IOPS, 31.40 MiB/s [2024-10-01 13:52:35.867470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.868332] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.868374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.444 [2024-10-01 13:52:35.868394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.868573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.869558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.869594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.869613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.870255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.444 [2024-10-01 13:52:35.871858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.871983] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.872023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.444 [2024-10-01 13:52:35.872043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.872077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.872108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.872126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.872140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.872171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.444 [2024-10-01 13:52:35.877805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.877931] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.877971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.444 [2024-10-01 13:52:35.877992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.878026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.878058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.878076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.878090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.878121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.444 [2024-10-01 13:52:35.881954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.882064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.882103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.444 [2024-10-01 13:52:35.882123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.882157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.882189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.882206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.882221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.882251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.444 [2024-10-01 13:52:35.888665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.888777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.888816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.444 [2024-10-01 13:52:35.888837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.888871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.888926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.888948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.888963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.888994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.444 [2024-10-01 13:52:35.892902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.893027] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.893066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.444 [2024-10-01 13:52:35.893103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.893139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.893171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.893189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.893203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.893233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.444 [2024-10-01 13:52:35.900876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.901468] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.901532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.444 [2024-10-01 13:52:35.901568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.901768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.901958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.902006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.902038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.902118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.444 [2024-10-01 13:52:35.904180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.905418] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.905471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.444 [2024-10-01 13:52:35.905505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.905759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.905893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.905950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.905972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.907585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.444 [2024-10-01 13:52:35.911705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.911829] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.911871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.444 [2024-10-01 13:52:35.911893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.911951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.911991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.912022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.912047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.912094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.444 [2024-10-01 13:52:35.916219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.916403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.916447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.444 [2024-10-01 13:52:35.916469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.916504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.916536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.916553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.916568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.916600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.444 [2024-10-01 13:52:35.922978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.923797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.923842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.444 [2024-10-01 13:52:35.923863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.924078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.924138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.924161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.924176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.924208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.444 [2024-10-01 13:52:35.927098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.927213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.927254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.444 [2024-10-01 13:52:35.927284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.927318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.927349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.927367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.927381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.927412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.444 [2024-10-01 13:52:35.933069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.933207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.933248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.444 [2024-10-01 13:52:35.933269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.933303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.933335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.933352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.933366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.933397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.444 [2024-10-01 13:52:35.938344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.938461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.938493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.444 [2024-10-01 13:52:35.938512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.939292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.939492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.939526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.939545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.939608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.444 [2024-10-01 13:52:35.943183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.943298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.943331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.444 [2024-10-01 13:52:35.943350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.943382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.943414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.943431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.943445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.943476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.444 [2024-10-01 13:52:35.948433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.948548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.948588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.444 [2024-10-01 13:52:35.948614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.948668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.948701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.948720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.444 [2024-10-01 13:52:35.948735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.444 [2024-10-01 13:52:35.948765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.444 [2024-10-01 13:52:35.953721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.444 [2024-10-01 13:52:35.953835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.444 [2024-10-01 13:52:35.953866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.444 [2024-10-01 13:52:35.953885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.444 [2024-10-01 13:52:35.954642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.444 [2024-10-01 13:52:35.954847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.444 [2024-10-01 13:52:35.954882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:35.954899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:35.954956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.445 [2024-10-01 13:52:35.958521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:35.958649] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:35.958689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.445 [2024-10-01 13:52:35.958709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:35.958742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:35.958775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:35.958793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:35.958807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:35.958837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.445 [2024-10-01 13:52:35.963812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:35.963938] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:35.963978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.445 [2024-10-01 13:52:35.963999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:35.964033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:35.964065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:35.964082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:35.964114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:35.964149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.445 [2024-10-01 13:52:35.969059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:35.969172] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:35.969206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.445 [2024-10-01 13:52:35.969225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:35.969969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:35.970166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:35.970201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:35.970219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:35.970260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.445 [2024-10-01 13:52:35.973902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:35.974023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:35.974054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.445 [2024-10-01 13:52:35.974073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:35.974105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:35.974137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:35.974154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:35.974169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:35.974199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.445 [2024-10-01 13:52:35.979154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:35.979264] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:35.979295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.445 [2024-10-01 13:52:35.979314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:35.979346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:35.979377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:35.979394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:35.979408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:35.979438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.445 [2024-10-01 13:52:35.984376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:35.984501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:35.984560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.445 [2024-10-01 13:52:35.984583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:35.985334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:35.985547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:35.985582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:35.985600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:35.985663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.445 [2024-10-01 13:52:35.989242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:35.989358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:35.989390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.445 [2024-10-01 13:52:35.989409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:35.989442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:35.989475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:35.989493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:35.989507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:35.989538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.445 [2024-10-01 13:52:35.994473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:35.994597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:35.994637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.445 [2024-10-01 13:52:35.994657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:35.994691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:35.994722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:35.994739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:35.994754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:35.994784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.445 [2024-10-01 13:52:35.999701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:35.999814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:35.999852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.445 [2024-10-01 13:52:35.999872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:36.000617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:36.000834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:36.000868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:36.000886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:36.000941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.445 [2024-10-01 13:52:36.004559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:36.004673] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:36.004712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.445 [2024-10-01 13:52:36.004732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:36.004765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:36.004808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:36.004825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:36.004839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:36.004870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.445 [2024-10-01 13:52:36.009793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:36.009906] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:36.009955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.445 [2024-10-01 13:52:36.009974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:36.010008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:36.010039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:36.010057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:36.010071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:36.010101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.445 [2024-10-01 13:52:36.015105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:36.015947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:36.015989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.445 [2024-10-01 13:52:36.016009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:36.016186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:36.016242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:36.016264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:36.016279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:36.016310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.445 [2024-10-01 13:52:36.019886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:36.020013] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:36.020062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.445 [2024-10-01 13:52:36.020082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:36.020115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:36.020146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:36.020164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:36.020177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:36.020207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.445 [2024-10-01 13:52:36.025191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:36.025303] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:36.025341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.445 [2024-10-01 13:52:36.025361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:36.025394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:36.025426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:36.025444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:36.025459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:36.025490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.445 [2024-10-01 13:52:36.030278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:36.030391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:36.030424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.445 [2024-10-01 13:52:36.030442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:36.031201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:36.031401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:36.031435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:36.031454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:36.031514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.445 [2024-10-01 13:52:36.035280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:36.035390] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:36.035429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.445 [2024-10-01 13:52:36.035471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:36.035507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:36.035556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:36.035578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:36.035593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:36.035624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.445 [2024-10-01 13:52:36.040367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:36.040478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:36.040511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.445 [2024-10-01 13:52:36.040530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:36.040563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:36.040594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:36.040611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:36.040626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.445 [2024-10-01 13:52:36.040655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.445 [2024-10-01 13:52:36.045530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.445 [2024-10-01 13:52:36.045644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.445 [2024-10-01 13:52:36.045686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.445 [2024-10-01 13:52:36.045707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.445 [2024-10-01 13:52:36.046453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.445 [2024-10-01 13:52:36.046670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.445 [2024-10-01 13:52:36.046706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.445 [2024-10-01 13:52:36.046724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.046766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.446 [2024-10-01 13:52:36.050453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.050574] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.050614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.446 [2024-10-01 13:52:36.050635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.050668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.050700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.050735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.050757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.050789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.446 [2024-10-01 13:52:36.055622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.055744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.055785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.446 [2024-10-01 13:52:36.055806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.055846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.055878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.055895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.055923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.055961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.446 [2024-10-01 13:52:36.060765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.060880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.060931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.446 [2024-10-01 13:52:36.060954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.061683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.061900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.061949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.061967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.062009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.446 [2024-10-01 13:52:36.065737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.065847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.065880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.446 [2024-10-01 13:52:36.065898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.065948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.065983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.066001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.066016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.066047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.446 [2024-10-01 13:52:36.070853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.071000] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.071032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.446 [2024-10-01 13:52:36.071051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.071083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.071114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.071132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.071146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.071177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.446 [2024-10-01 13:52:36.076107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.076222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.076255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.446 [2024-10-01 13:52:36.076274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.077018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.077216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.077251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.077269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.077310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.446 [2024-10-01 13:52:36.080972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.081081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.081120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.446 [2024-10-01 13:52:36.081140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.081173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.081205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.081223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.081237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.081267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.446 [2024-10-01 13:52:36.086195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.086306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.086338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.446 [2024-10-01 13:52:36.086356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.086407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.086453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.086470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.086485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.086515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.446 [2024-10-01 13:52:36.091408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.091520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.091559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.446 [2024-10-01 13:52:36.091580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.092325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.092522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.092556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.092574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.092615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.446 [2024-10-01 13:52:36.096279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.096391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.096430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.446 [2024-10-01 13:52:36.096451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.096484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.096516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.096534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.096548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.096579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.446 [2024-10-01 13:52:36.101501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.101611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.101649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.446 [2024-10-01 13:52:36.101669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.101701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.101732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.101750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.101783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.101817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.446 [2024-10-01 13:52:36.106499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.106623] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.106708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.446 [2024-10-01 13:52:36.106733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.107480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.107678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.107713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.107731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.107792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.446 [2024-10-01 13:52:36.111587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.111696] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.111728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.446 [2024-10-01 13:52:36.111746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.111778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.111826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.111848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.111863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.111894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.446 [2024-10-01 13:52:36.116599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.116710] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.116742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.446 [2024-10-01 13:52:36.116761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.116798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.116829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.116847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.116861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.116891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.446 [2024-10-01 13:52:36.121676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.121788] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.121848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.446 [2024-10-01 13:52:36.121869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.122629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.122828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.122863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.122881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.122936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.446 [2024-10-01 13:52:36.126685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.126810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.126849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.446 [2024-10-01 13:52:36.126871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.126905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.126954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.446 [2024-10-01 13:52:36.126973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.446 [2024-10-01 13:52:36.126987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.446 [2024-10-01 13:52:36.127019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.446 [2024-10-01 13:52:36.131769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.446 [2024-10-01 13:52:36.131890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.446 [2024-10-01 13:52:36.131945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.446 [2024-10-01 13:52:36.131967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.446 [2024-10-01 13:52:36.132002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.446 [2024-10-01 13:52:36.132034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.447 [2024-10-01 13:52:36.132052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.447 [2024-10-01 13:52:36.132066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.447 [2024-10-01 13:52:36.132097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.447 [2024-10-01 13:52:36.137180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.447 [2024-10-01 13:52:36.137300] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.447 [2024-10-01 13:52:36.137341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.447 [2024-10-01 13:52:36.137362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.447 [2024-10-01 13:52:36.138140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.447 [2024-10-01 13:52:36.138392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.447 [2024-10-01 13:52:36.138428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.447 [2024-10-01 13:52:36.138447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.447 [2024-10-01 13:52:36.138489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.447 [2024-10-01 13:52:36.141862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.447 [2024-10-01 13:52:36.141987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.447 [2024-10-01 13:52:36.142026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.447 [2024-10-01 13:52:36.142046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.447 [2024-10-01 13:52:36.142080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.447 [2024-10-01 13:52:36.142111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.447 [2024-10-01 13:52:36.142133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.447 [2024-10-01 13:52:36.142148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.447 [2024-10-01 13:52:36.142179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.447 [2024-10-01 13:52:36.147276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.447 [2024-10-01 13:52:36.147386] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.447 [2024-10-01 13:52:36.147419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.447 [2024-10-01 13:52:36.147437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.447 [2024-10-01 13:52:36.147469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.447 [2024-10-01 13:52:36.147500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.447 [2024-10-01 13:52:36.147517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.447 [2024-10-01 13:52:36.147533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.447 [2024-10-01 13:52:36.147563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.447 [2024-10-01 13:52:36.152605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.447 [2024-10-01 13:52:36.152718] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.447 [2024-10-01 13:52:36.152751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.447 [2024-10-01 13:52:36.152769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.447 [2024-10-01 13:52:36.153514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.447 [2024-10-01 13:52:36.153715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.447 [2024-10-01 13:52:36.153749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.447 [2024-10-01 13:52:36.153767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.447 [2024-10-01 13:52:36.153808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.447 [2024-10-01 13:52:36.157362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.447 [2024-10-01 13:52:36.157478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.447 [2024-10-01 13:52:36.157516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.447 [2024-10-01 13:52:36.157536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.447 [2024-10-01 13:52:36.157569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.447 [2024-10-01 13:52:36.157600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.447 [2024-10-01 13:52:36.157618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.447 [2024-10-01 13:52:36.157632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.447 [2024-10-01 13:52:36.157663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.447 [2024-10-01 13:52:36.162695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.447 [2024-10-01 13:52:36.162815] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.447 [2024-10-01 13:52:36.162847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.447 [2024-10-01 13:52:36.162865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.447 [2024-10-01 13:52:36.162897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.447 [2024-10-01 13:52:36.162955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.447 [2024-10-01 13:52:36.162975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.447 [2024-10-01 13:52:36.162989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.447 [2024-10-01 13:52:36.163021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.447 [2024-10-01 13:52:36.167903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.447 [2024-10-01 13:52:36.168029] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.447 [2024-10-01 13:52:36.168069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.447 [2024-10-01 13:52:36.168089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.447 [2024-10-01 13:52:36.168827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.447 [2024-10-01 13:52:36.169038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.447 [2024-10-01 13:52:36.169072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.447 [2024-10-01 13:52:36.169101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.447 [2024-10-01 13:52:36.169143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.447 [2024-10-01 13:52:36.172789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.456 [2024-10-01 13:52:36.172900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.456 [2024-10-01 13:52:36.172958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.456 [2024-10-01 13:52:36.172997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.456 [2024-10-01 13:52:36.173034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.456 [2024-10-01 13:52:36.173066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.456 [2024-10-01 13:52:36.173083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.456 [2024-10-01 13:52:36.173098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.456 [2024-10-01 13:52:36.173128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.456 [2024-10-01 13:52:36.178005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.456 [2024-10-01 13:52:36.178123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.456 [2024-10-01 13:52:36.178155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.456 [2024-10-01 13:52:36.178173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.456 [2024-10-01 13:52:36.178206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.456 [2024-10-01 13:52:36.178237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.456 [2024-10-01 13:52:36.178255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.456 [2024-10-01 13:52:36.178269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.456 [2024-10-01 13:52:36.178300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.456 [2024-10-01 13:52:36.183248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.456 [2024-10-01 13:52:36.183362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.456 [2024-10-01 13:52:36.183404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.456 [2024-10-01 13:52:36.183425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.456 [2024-10-01 13:52:36.184169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.456 [2024-10-01 13:52:36.184367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.456 [2024-10-01 13:52:36.184403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.456 [2024-10-01 13:52:36.184422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.456 [2024-10-01 13:52:36.184483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.456 [2024-10-01 13:52:36.188099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.456 [2024-10-01 13:52:36.188210] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.456 [2024-10-01 13:52:36.188249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.456 [2024-10-01 13:52:36.188270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.456 [2024-10-01 13:52:36.188302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.456 [2024-10-01 13:52:36.188334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.456 [2024-10-01 13:52:36.188374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.456 [2024-10-01 13:52:36.188391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.456 [2024-10-01 13:52:36.188422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.456 [2024-10-01 13:52:36.193338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.456 [2024-10-01 13:52:36.193451] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.456 [2024-10-01 13:52:36.193490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.456 [2024-10-01 13:52:36.193511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.456 [2024-10-01 13:52:36.193544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.456 [2024-10-01 13:52:36.193576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.456 [2024-10-01 13:52:36.193593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.456 [2024-10-01 13:52:36.193608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.456 [2024-10-01 13:52:36.193638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.456 [2024-10-01 13:52:36.198501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.456 [2024-10-01 13:52:36.198622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.456 [2024-10-01 13:52:36.198656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.456 [2024-10-01 13:52:36.198674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.456 [2024-10-01 13:52:36.199426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.456 [2024-10-01 13:52:36.199624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.456 [2024-10-01 13:52:36.199659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.456 [2024-10-01 13:52:36.199677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.456 [2024-10-01 13:52:36.199719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.456 [2024-10-01 13:52:36.203429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.456 [2024-10-01 13:52:36.203538] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.456 [2024-10-01 13:52:36.203587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.456 [2024-10-01 13:52:36.203607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.456 [2024-10-01 13:52:36.203641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.456 [2024-10-01 13:52:36.203672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.456 [2024-10-01 13:52:36.203689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.456 [2024-10-01 13:52:36.203704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.456 [2024-10-01 13:52:36.203734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.456 [2024-10-01 13:52:36.208596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.456 [2024-10-01 13:52:36.208727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.456 [2024-10-01 13:52:36.208760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.456 [2024-10-01 13:52:36.208778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.456 [2024-10-01 13:52:36.208811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.456 [2024-10-01 13:52:36.208842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.456 [2024-10-01 13:52:36.208860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.456 [2024-10-01 13:52:36.208874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.456 [2024-10-01 13:52:36.208906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.456 [2024-10-01 13:52:36.213783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.456 [2024-10-01 13:52:36.213895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.456 [2024-10-01 13:52:36.213946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.456 [2024-10-01 13:52:36.213967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.456 [2024-10-01 13:52:36.214706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.457 [2024-10-01 13:52:36.214926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.457 [2024-10-01 13:52:36.214959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.457 [2024-10-01 13:52:36.214976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.457 [2024-10-01 13:52:36.215018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.457 [2024-10-01 13:52:36.218698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.457 [2024-10-01 13:52:36.218808] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.457 [2024-10-01 13:52:36.218839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.457 [2024-10-01 13:52:36.218857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.457 [2024-10-01 13:52:36.218890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.457 [2024-10-01 13:52:36.218937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.457 [2024-10-01 13:52:36.218957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.457 [2024-10-01 13:52:36.218972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.457 [2024-10-01 13:52:36.219012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.457 [2024-10-01 13:52:36.223869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.457 [2024-10-01 13:52:36.223997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.457 [2024-10-01 13:52:36.224037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.457 [2024-10-01 13:52:36.224057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.457 [2024-10-01 13:52:36.224110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.457 [2024-10-01 13:52:36.224143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.457 [2024-10-01 13:52:36.224160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.457 [2024-10-01 13:52:36.224175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.457 [2024-10-01 13:52:36.224206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.457 [2024-10-01 13:52:36.229138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.457 [2024-10-01 13:52:36.229252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.457 [2024-10-01 13:52:36.229292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.457 [2024-10-01 13:52:36.229312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.457 [2024-10-01 13:52:36.230065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.457 [2024-10-01 13:52:36.230273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.457 [2024-10-01 13:52:36.230308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.457 [2024-10-01 13:52:36.230326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.457 [2024-10-01 13:52:36.230386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.457 [2024-10-01 13:52:36.233972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.457 [2024-10-01 13:52:36.234082] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.457 [2024-10-01 13:52:36.234114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.457 [2024-10-01 13:52:36.234132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.457 [2024-10-01 13:52:36.234165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.457 [2024-10-01 13:52:36.234196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.457 [2024-10-01 13:52:36.234213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.457 [2024-10-01 13:52:36.234228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.457 [2024-10-01 13:52:36.234258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.457 [2024-10-01 13:52:36.239231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.457 [2024-10-01 13:52:36.239344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.457 [2024-10-01 13:52:36.239383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.457 [2024-10-01 13:52:36.239402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.457 [2024-10-01 13:52:36.239435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.457 [2024-10-01 13:52:36.239466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.457 [2024-10-01 13:52:36.239483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.457 [2024-10-01 13:52:36.239518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.457 [2024-10-01 13:52:36.239551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.457 [2024-10-01 13:52:36.244536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.457 [2024-10-01 13:52:36.244664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.457 [2024-10-01 13:52:36.244702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.457 [2024-10-01 13:52:36.244722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.457 [2024-10-01 13:52:36.245478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.457 [2024-10-01 13:52:36.245682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.457 [2024-10-01 13:52:36.245716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.457 [2024-10-01 13:52:36.245734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.457 [2024-10-01 13:52:36.245775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.457 [2024-10-01 13:52:36.249321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.457 [2024-10-01 13:52:36.249431] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.457 [2024-10-01 13:52:36.249463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.457 [2024-10-01 13:52:36.249481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.457 [2024-10-01 13:52:36.249513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.457 [2024-10-01 13:52:36.249544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.457 [2024-10-01 13:52:36.249562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.457 [2024-10-01 13:52:36.249576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.457 [2024-10-01 13:52:36.249606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.457 [2024-10-01 13:52:36.254630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.457 [2024-10-01 13:52:36.254741] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.457 [2024-10-01 13:52:36.254780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.457 [2024-10-01 13:52:36.254801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.457 [2024-10-01 13:52:36.254834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.457 [2024-10-01 13:52:36.254866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.457 [2024-10-01 13:52:36.254883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.457 [2024-10-01 13:52:36.254898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.457 [2024-10-01 13:52:36.254980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.457 [2024-10-01 13:52:36.259899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.457 [2024-10-01 13:52:36.260027] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.457 [2024-10-01 13:52:36.260078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.457 [2024-10-01 13:52:36.260098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.457 [2024-10-01 13:52:36.260831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.457 [2024-10-01 13:52:36.261045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.457 [2024-10-01 13:52:36.261080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.457 [2024-10-01 13:52:36.261098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.457 [2024-10-01 13:52:36.261139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.457 [2024-10-01 13:52:36.264720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.457 [2024-10-01 13:52:36.264837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.457 [2024-10-01 13:52:36.264870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.457 [2024-10-01 13:52:36.264892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.457 [2024-10-01 13:52:36.264940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.457 [2024-10-01 13:52:36.264975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.457 [2024-10-01 13:52:36.264992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.457 [2024-10-01 13:52:36.265006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.458 [2024-10-01 13:52:36.265037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.458 [2024-10-01 13:52:36.270006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.458 [2024-10-01 13:52:36.270118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.458 [2024-10-01 13:52:36.270150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.458 [2024-10-01 13:52:36.270168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.458 [2024-10-01 13:52:36.270200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.458 [2024-10-01 13:52:36.270231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.458 [2024-10-01 13:52:36.270249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.458 [2024-10-01 13:52:36.270264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.458 [2024-10-01 13:52:36.270295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.458 [2024-10-01 13:52:36.275099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.458 [2024-10-01 13:52:36.275213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.458 [2024-10-01 13:52:36.275246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.458 [2024-10-01 13:52:36.275265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.458 [2024-10-01 13:52:36.276016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.458 [2024-10-01 13:52:36.276232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.458 [2024-10-01 13:52:36.276266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.458 [2024-10-01 13:52:36.276283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.458 [2024-10-01 13:52:36.276343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.458 [2024-10-01 13:52:36.280098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.458 [2024-10-01 13:52:36.280213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.458 [2024-10-01 13:52:36.280245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.458 [2024-10-01 13:52:36.280264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.458 [2024-10-01 13:52:36.280296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.458 [2024-10-01 13:52:36.280345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.458 [2024-10-01 13:52:36.280367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.458 [2024-10-01 13:52:36.280382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.458 [2024-10-01 13:52:36.280413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.458 [2024-10-01 13:52:36.285190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.458 [2024-10-01 13:52:36.285301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.458 [2024-10-01 13:52:36.285333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.458 [2024-10-01 13:52:36.285352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.458 [2024-10-01 13:52:36.285385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.458 [2024-10-01 13:52:36.285415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.458 [2024-10-01 13:52:36.285433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.458 [2024-10-01 13:52:36.285448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.458 [2024-10-01 13:52:36.285478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.458 [2024-10-01 13:52:36.290345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.458 [2024-10-01 13:52:36.290457] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.458 [2024-10-01 13:52:36.290497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.458 [2024-10-01 13:52:36.290516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.458 [2024-10-01 13:52:36.291272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.458 [2024-10-01 13:52:36.291471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.458 [2024-10-01 13:52:36.291506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.458 [2024-10-01 13:52:36.291524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.458 [2024-10-01 13:52:36.291588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.458 [2024-10-01 13:52:36.295278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.458 [2024-10-01 13:52:36.295389] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.458 [2024-10-01 13:52:36.295421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.458 [2024-10-01 13:52:36.295440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.458 [2024-10-01 13:52:36.295473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.458 [2024-10-01 13:52:36.295504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.458 [2024-10-01 13:52:36.295522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.458 [2024-10-01 13:52:36.295536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.458 [2024-10-01 13:52:36.295566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.458 [2024-10-01 13:52:36.300433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.458 [2024-10-01 13:52:36.300545] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.458 [2024-10-01 13:52:36.300586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.458 [2024-10-01 13:52:36.300606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.458 [2024-10-01 13:52:36.300639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.458 [2024-10-01 13:52:36.300671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.458 [2024-10-01 13:52:36.300688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.458 [2024-10-01 13:52:36.300703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.458 [2024-10-01 13:52:36.300733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.458 [2024-10-01 13:52:36.305587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.458 [2024-10-01 13:52:36.305701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.458 [2024-10-01 13:52:36.305746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.458 [2024-10-01 13:52:36.305767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.458 [2024-10-01 13:52:36.306514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.458 [2024-10-01 13:52:36.306746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.458 [2024-10-01 13:52:36.306782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.458 [2024-10-01 13:52:36.306800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.458 [2024-10-01 13:52:36.306840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.458 [2024-10-01 13:52:36.310523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.458 [2024-10-01 13:52:36.310667] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.458 [2024-10-01 13:52:36.310699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.458 [2024-10-01 13:52:36.310752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.458 [2024-10-01 13:52:36.310788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.458 [2024-10-01 13:52:36.310821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.458 [2024-10-01 13:52:36.310839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.458 [2024-10-01 13:52:36.310854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.458 [2024-10-01 13:52:36.310884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.458 [2024-10-01 13:52:36.315677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.458 [2024-10-01 13:52:36.315794] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.458 [2024-10-01 13:52:36.315826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.458 [2024-10-01 13:52:36.315844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.458 [2024-10-01 13:52:36.315876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.459 [2024-10-01 13:52:36.315907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.459 [2024-10-01 13:52:36.315945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.459 [2024-10-01 13:52:36.315961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.459 [2024-10-01 13:52:36.315992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.459 [2024-10-01 13:52:36.320957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.459 [2024-10-01 13:52:36.321075] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.459 [2024-10-01 13:52:36.321116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.459 [2024-10-01 13:52:36.321137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.459 [2024-10-01 13:52:36.321887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.459 [2024-10-01 13:52:36.322106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.459 [2024-10-01 13:52:36.322141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.459 [2024-10-01 13:52:36.322160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.459 [2024-10-01 13:52:36.322201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.459 [2024-10-01 13:52:36.325774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.459 [2024-10-01 13:52:36.325886] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.459 [2024-10-01 13:52:36.325932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.459 [2024-10-01 13:52:36.325953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.459 [2024-10-01 13:52:36.325988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.459 [2024-10-01 13:52:36.326020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.459 [2024-10-01 13:52:36.326070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.459 [2024-10-01 13:52:36.326087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.459 [2024-10-01 13:52:36.326119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.459 [2024-10-01 13:52:36.331053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.459 [2024-10-01 13:52:36.331173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.459 [2024-10-01 13:52:36.331213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.459 [2024-10-01 13:52:36.331234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.459 [2024-10-01 13:52:36.331268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.459 [2024-10-01 13:52:36.331300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.459 [2024-10-01 13:52:36.331318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.459 [2024-10-01 13:52:36.331334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.459 [2024-10-01 13:52:36.331364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.459 [2024-10-01 13:52:36.336372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.459 [2024-10-01 13:52:36.337225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.459 [2024-10-01 13:52:36.337268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.459 [2024-10-01 13:52:36.337289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.459 [2024-10-01 13:52:36.337468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.459 [2024-10-01 13:52:36.337525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.459 [2024-10-01 13:52:36.337546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.459 [2024-10-01 13:52:36.337561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.459 [2024-10-01 13:52:36.337592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.459 [2024-10-01 13:52:36.341144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.459 [2024-10-01 13:52:36.341257] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.459 [2024-10-01 13:52:36.341290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.459 [2024-10-01 13:52:36.341309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.459 [2024-10-01 13:52:36.341343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.459 [2024-10-01 13:52:36.341374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.459 [2024-10-01 13:52:36.341392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.459 [2024-10-01 13:52:36.341407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.459 [2024-10-01 13:52:36.341438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.459 [2024-10-01 13:52:36.346466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.459 [2024-10-01 13:52:36.346626] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.459 [2024-10-01 13:52:36.346658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.459 [2024-10-01 13:52:36.346676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.459 [2024-10-01 13:52:36.346709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.459 [2024-10-01 13:52:36.346742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.459 [2024-10-01 13:52:36.346759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.459 [2024-10-01 13:52:36.346773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.459 [2024-10-01 13:52:36.346805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.459 [2024-10-01 13:52:36.351729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.459 [2024-10-01 13:52:36.352602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.459 [2024-10-01 13:52:36.352647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.459 [2024-10-01 13:52:36.352669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.459 [2024-10-01 13:52:36.352853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.460 [2024-10-01 13:52:36.352944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.460 [2024-10-01 13:52:36.352971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.460 [2024-10-01 13:52:36.352988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.460 [2024-10-01 13:52:36.353021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.460 [2024-10-01 13:52:36.356597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.460 [2024-10-01 13:52:36.356715] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.460 [2024-10-01 13:52:36.356754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.460 [2024-10-01 13:52:36.356774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.460 [2024-10-01 13:52:36.356808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.460 [2024-10-01 13:52:36.356840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.460 [2024-10-01 13:52:36.356858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.460 [2024-10-01 13:52:36.356873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.460 [2024-10-01 13:52:36.356904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.460 [2024-10-01 13:52:36.361833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.460 [2024-10-01 13:52:36.361978] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.460 [2024-10-01 13:52:36.362011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.460 [2024-10-01 13:52:36.362031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.460 [2024-10-01 13:52:36.362097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.460 [2024-10-01 13:52:36.362130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.460 [2024-10-01 13:52:36.362149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.460 [2024-10-01 13:52:36.362164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.460 [2024-10-01 13:52:36.362196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.460 [2024-10-01 13:52:36.367103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.460 [2024-10-01 13:52:36.367950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.460 [2024-10-01 13:52:36.367992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.460 [2024-10-01 13:52:36.368013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.460 [2024-10-01 13:52:36.368197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.460 [2024-10-01 13:52:36.368254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.460 [2024-10-01 13:52:36.368275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.460 [2024-10-01 13:52:36.368290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.460 [2024-10-01 13:52:36.368322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.460 [2024-10-01 13:52:36.371948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.460 [2024-10-01 13:52:36.372063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.460 [2024-10-01 13:52:36.372096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.460 [2024-10-01 13:52:36.372115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.460 [2024-10-01 13:52:36.372148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.460 [2024-10-01 13:52:36.372180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.460 [2024-10-01 13:52:36.372197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.460 [2024-10-01 13:52:36.372212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.460 [2024-10-01 13:52:36.372243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.460 [2024-10-01 13:52:36.377192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.460 [2024-10-01 13:52:36.377307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.460 [2024-10-01 13:52:36.377347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.460 [2024-10-01 13:52:36.377367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.460 [2024-10-01 13:52:36.377401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.460 [2024-10-01 13:52:36.377433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.460 [2024-10-01 13:52:36.377450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.460 [2024-10-01 13:52:36.377490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.460 [2024-10-01 13:52:36.377523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.460 [2024-10-01 13:52:36.382410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.460 [2024-10-01 13:52:36.383294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.460 [2024-10-01 13:52:36.383339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.461 [2024-10-01 13:52:36.383360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.461 [2024-10-01 13:52:36.383558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.461 [2024-10-01 13:52:36.383616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.461 [2024-10-01 13:52:36.383638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.461 [2024-10-01 13:52:36.383654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.461 [2024-10-01 13:52:36.383687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.461 [2024-10-01 13:52:36.387296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.461 [2024-10-01 13:52:36.387415] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.461 [2024-10-01 13:52:36.387446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.461 [2024-10-01 13:52:36.387465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.461 [2024-10-01 13:52:36.387497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.461 [2024-10-01 13:52:36.387529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.461 [2024-10-01 13:52:36.387546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.461 [2024-10-01 13:52:36.387561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.461 [2024-10-01 13:52:36.387592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.461 [2024-10-01 13:52:36.392506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.461 [2024-10-01 13:52:36.392622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.461 [2024-10-01 13:52:36.392662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.461 [2024-10-01 13:52:36.392682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.461 [2024-10-01 13:52:36.392716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.461 [2024-10-01 13:52:36.392748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.461 [2024-10-01 13:52:36.392766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.461 [2024-10-01 13:52:36.392781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.461 [2024-10-01 13:52:36.392812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.461 [2024-10-01 13:52:36.397644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.461 [2024-10-01 13:52:36.397760] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.461 [2024-10-01 13:52:36.397829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.461 [2024-10-01 13:52:36.397851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.461 [2024-10-01 13:52:36.398622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.461 [2024-10-01 13:52:36.398831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.461 [2024-10-01 13:52:36.398866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.461 [2024-10-01 13:52:36.398884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.461 [2024-10-01 13:52:36.398939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.461 [2024-10-01 13:52:36.402594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.461 [2024-10-01 13:52:36.402707] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.461 [2024-10-01 13:52:36.402739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.461 [2024-10-01 13:52:36.402757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.461 [2024-10-01 13:52:36.402789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.461 [2024-10-01 13:52:36.402820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.461 [2024-10-01 13:52:36.402838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.461 [2024-10-01 13:52:36.402853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.461 [2024-10-01 13:52:36.402883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.461 [2024-10-01 13:52:36.407734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.461 [2024-10-01 13:52:36.407848] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.461 [2024-10-01 13:52:36.407886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.461 [2024-10-01 13:52:36.407906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.461 [2024-10-01 13:52:36.407956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.461 [2024-10-01 13:52:36.407988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.461 [2024-10-01 13:52:36.408007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.461 [2024-10-01 13:52:36.408022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.461 [2024-10-01 13:52:36.408053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.461 [2024-10-01 13:52:36.412967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.461 [2024-10-01 13:52:36.413803] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.461 [2024-10-01 13:52:36.413847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.461 [2024-10-01 13:52:36.413869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.461 [2024-10-01 13:52:36.414074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.461 [2024-10-01 13:52:36.414166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.461 [2024-10-01 13:52:36.414190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.461 [2024-10-01 13:52:36.414205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.461 [2024-10-01 13:52:36.414238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.461 [2024-10-01 13:52:36.417822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.461 [2024-10-01 13:52:36.417958] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.461 [2024-10-01 13:52:36.417997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.462 [2024-10-01 13:52:36.418017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.462 [2024-10-01 13:52:36.418053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.462 [2024-10-01 13:52:36.418085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.462 [2024-10-01 13:52:36.418102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.462 [2024-10-01 13:52:36.418117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.462 [2024-10-01 13:52:36.418148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.462 [2024-10-01 13:52:36.423066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.462 [2024-10-01 13:52:36.423199] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.462 [2024-10-01 13:52:36.423239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.462 [2024-10-01 13:52:36.423260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.462 [2024-10-01 13:52:36.423294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.462 [2024-10-01 13:52:36.423325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.462 [2024-10-01 13:52:36.423344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.462 [2024-10-01 13:52:36.423360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.462 [2024-10-01 13:52:36.423391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.462 [2024-10-01 13:52:36.428390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.462 [2024-10-01 13:52:36.429272] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.462 [2024-10-01 13:52:36.429318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.462 [2024-10-01 13:52:36.429340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.462 [2024-10-01 13:52:36.429528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.462 [2024-10-01 13:52:36.429586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.462 [2024-10-01 13:52:36.429609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.462 [2024-10-01 13:52:36.429625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.462 [2024-10-01 13:52:36.429693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.462 [2024-10-01 13:52:36.433163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.462 [2024-10-01 13:52:36.433280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.462 [2024-10-01 13:52:36.433319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.462 [2024-10-01 13:52:36.433339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.462 [2024-10-01 13:52:36.433373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.462 [2024-10-01 13:52:36.433404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.462 [2024-10-01 13:52:36.433422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.462 [2024-10-01 13:52:36.433437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.462 [2024-10-01 13:52:36.433468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.462 [2024-10-01 13:52:36.438503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.462 [2024-10-01 13:52:36.438644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.462 [2024-10-01 13:52:36.438676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.462 [2024-10-01 13:52:36.438695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.462 [2024-10-01 13:52:36.438729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.462 [2024-10-01 13:52:36.438761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.462 [2024-10-01 13:52:36.438779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.462 [2024-10-01 13:52:36.438794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.462 [2024-10-01 13:52:36.438825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.462 [2024-10-01 13:52:36.443813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.462 [2024-10-01 13:52:36.444692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.462 [2024-10-01 13:52:36.444737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.462 [2024-10-01 13:52:36.444758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.462 [2024-10-01 13:52:36.444957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.462 [2024-10-01 13:52:36.445014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.462 [2024-10-01 13:52:36.445036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.462 [2024-10-01 13:52:36.445055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.462 [2024-10-01 13:52:36.445088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.462 [2024-10-01 13:52:36.448614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.462 [2024-10-01 13:52:36.448724] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.462 [2024-10-01 13:52:36.448765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.462 [2024-10-01 13:52:36.448819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.462 [2024-10-01 13:52:36.448856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.462 [2024-10-01 13:52:36.448888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.462 [2024-10-01 13:52:36.448905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.462 [2024-10-01 13:52:36.448936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.462 [2024-10-01 13:52:36.448970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.462 [2024-10-01 13:52:36.453926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.463 [2024-10-01 13:52:36.454041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.463 [2024-10-01 13:52:36.454080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.463 [2024-10-01 13:52:36.454100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.463 [2024-10-01 13:52:36.454133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.463 [2024-10-01 13:52:36.454165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.463 [2024-10-01 13:52:36.454183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.463 [2024-10-01 13:52:36.454197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.463 [2024-10-01 13:52:36.454228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.463 [2024-10-01 13:52:36.459124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.463 [2024-10-01 13:52:36.459239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.463 [2024-10-01 13:52:36.459278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.463 [2024-10-01 13:52:36.459298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.463 [2024-10-01 13:52:36.460063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.463 [2024-10-01 13:52:36.460269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.463 [2024-10-01 13:52:36.460303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.463 [2024-10-01 13:52:36.460321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.463 [2024-10-01 13:52:36.460384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.463 [2024-10-01 13:52:36.464017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.463 [2024-10-01 13:52:36.464129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.463 [2024-10-01 13:52:36.464163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.463 [2024-10-01 13:52:36.464181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.463 [2024-10-01 13:52:36.464214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.463 [2024-10-01 13:52:36.464246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.463 [2024-10-01 13:52:36.464293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.463 [2024-10-01 13:52:36.464309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.463 [2024-10-01 13:52:36.464341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.463 [2024-10-01 13:52:36.469216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.463 [2024-10-01 13:52:36.469333] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.463 [2024-10-01 13:52:36.469366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.463 [2024-10-01 13:52:36.469384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.463 [2024-10-01 13:52:36.469417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.463 [2024-10-01 13:52:36.469449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.463 [2024-10-01 13:52:36.469467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.463 [2024-10-01 13:52:36.469482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.463 [2024-10-01 13:52:36.469522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.463 [2024-10-01 13:52:36.474409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.463 [2024-10-01 13:52:36.474524] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.463 [2024-10-01 13:52:36.474576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.463 [2024-10-01 13:52:36.474597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.463 [2024-10-01 13:52:36.475349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.463 [2024-10-01 13:52:36.475555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.463 [2024-10-01 13:52:36.475590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.463 [2024-10-01 13:52:36.475608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.463 [2024-10-01 13:52:36.475649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.463 [2024-10-01 13:52:36.479315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.463 [2024-10-01 13:52:36.479424] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.463 [2024-10-01 13:52:36.479465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.463 [2024-10-01 13:52:36.479485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.463 [2024-10-01 13:52:36.479518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.463 [2024-10-01 13:52:36.479549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.463 [2024-10-01 13:52:36.479566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.463 [2024-10-01 13:52:36.479581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.463 [2024-10-01 13:52:36.479612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.463 [2024-10-01 13:52:36.484502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.463 [2024-10-01 13:52:36.484639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.463 [2024-10-01 13:52:36.484679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.463 [2024-10-01 13:52:36.484699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.463 [2024-10-01 13:52:36.484733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.463 [2024-10-01 13:52:36.484765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.463 [2024-10-01 13:52:36.484782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.463 [2024-10-01 13:52:36.484797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.463 [2024-10-01 13:52:36.484828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.463 [2024-10-01 13:52:36.489573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.463 [2024-10-01 13:52:36.489692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.463 [2024-10-01 13:52:36.489732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.464 [2024-10-01 13:52:36.489752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.464 [2024-10-01 13:52:36.490497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.464 [2024-10-01 13:52:36.490725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.464 [2024-10-01 13:52:36.490761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.464 [2024-10-01 13:52:36.490778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.464 [2024-10-01 13:52:36.490820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.464 [2024-10-01 13:52:36.494612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.464 [2024-10-01 13:52:36.494731] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.464 [2024-10-01 13:52:36.494764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.464 [2024-10-01 13:52:36.494783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.464 [2024-10-01 13:52:36.494815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.464 [2024-10-01 13:52:36.494847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.464 [2024-10-01 13:52:36.494864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.464 [2024-10-01 13:52:36.494879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.464 [2024-10-01 13:52:36.494909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.464 [2024-10-01 13:52:36.499667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.464 [2024-10-01 13:52:36.499778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.464 [2024-10-01 13:52:36.499809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.464 [2024-10-01 13:52:36.499828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.464 [2024-10-01 13:52:36.499881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.464 [2024-10-01 13:52:36.499931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.464 [2024-10-01 13:52:36.499953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.464 [2024-10-01 13:52:36.499968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.464 [2024-10-01 13:52:36.500000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.464 [2024-10-01 13:52:36.504789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.464 [2024-10-01 13:52:36.504900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.464 [2024-10-01 13:52:36.504962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.464 [2024-10-01 13:52:36.504983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.464 [2024-10-01 13:52:36.505713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.464 [2024-10-01 13:52:36.505908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.464 [2024-10-01 13:52:36.505954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.464 [2024-10-01 13:52:36.505971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.464 [2024-10-01 13:52:36.506012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.464 [2024-10-01 13:52:36.509759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.464 [2024-10-01 13:52:36.509870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.464 [2024-10-01 13:52:36.509908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.464 [2024-10-01 13:52:36.509942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.464 [2024-10-01 13:52:36.509976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.464 [2024-10-01 13:52:36.510007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.464 [2024-10-01 13:52:36.510025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.464 [2024-10-01 13:52:36.510040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.464 [2024-10-01 13:52:36.510071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.464 [2024-10-01 13:52:36.514876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.464 [2024-10-01 13:52:36.514999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.464 [2024-10-01 13:52:36.515041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.464 [2024-10-01 13:52:36.515061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.464 [2024-10-01 13:52:36.515094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.464 [2024-10-01 13:52:36.515125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.464 [2024-10-01 13:52:36.515142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.464 [2024-10-01 13:52:36.515174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.464 [2024-10-01 13:52:36.515208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.464 [2024-10-01 13:52:36.520039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.464 [2024-10-01 13:52:36.520152] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.464 [2024-10-01 13:52:36.520191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.464 [2024-10-01 13:52:36.520212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.464 [2024-10-01 13:52:36.520954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.464 [2024-10-01 13:52:36.521151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.464 [2024-10-01 13:52:36.521186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.464 [2024-10-01 13:52:36.521203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.464 [2024-10-01 13:52:36.521246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.464 [2024-10-01 13:52:36.524979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.464 [2024-10-01 13:52:36.525089] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.464 [2024-10-01 13:52:36.525121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.464 [2024-10-01 13:52:36.525139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.464 [2024-10-01 13:52:36.525173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.464 [2024-10-01 13:52:36.525204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.465 [2024-10-01 13:52:36.525222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.465 [2024-10-01 13:52:36.525236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.465 [2024-10-01 13:52:36.525266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.465 [2024-10-01 13:52:36.530132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.465 [2024-10-01 13:52:36.530244] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.465 [2024-10-01 13:52:36.530275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.465 [2024-10-01 13:52:36.530293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.465 [2024-10-01 13:52:36.530326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.465 [2024-10-01 13:52:36.530357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.465 [2024-10-01 13:52:36.530375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.465 [2024-10-01 13:52:36.530389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.465 [2024-10-01 13:52:36.530420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.465 [2024-10-01 13:52:36.535333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.465 [2024-10-01 13:52:36.535447] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.465 [2024-10-01 13:52:36.535513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.465 [2024-10-01 13:52:36.535536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.465 [2024-10-01 13:52:36.536294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.465 [2024-10-01 13:52:36.536499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.465 [2024-10-01 13:52:36.536534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.465 [2024-10-01 13:52:36.536552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.465 [2024-10-01 13:52:36.536615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.465 [2024-10-01 13:52:36.540219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.465 [2024-10-01 13:52:36.540329] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.465 [2024-10-01 13:52:36.540368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.465 [2024-10-01 13:52:36.540388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.465 [2024-10-01 13:52:36.540422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.465 [2024-10-01 13:52:36.540453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.465 [2024-10-01 13:52:36.540471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.465 [2024-10-01 13:52:36.540486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.465 [2024-10-01 13:52:36.540516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.465 [2024-10-01 13:52:36.545419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.465 [2024-10-01 13:52:36.545532] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.465 [2024-10-01 13:52:36.545574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.465 [2024-10-01 13:52:36.545594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.465 [2024-10-01 13:52:36.545628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.465 [2024-10-01 13:52:36.545659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.465 [2024-10-01 13:52:36.545677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.465 [2024-10-01 13:52:36.545692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.465 [2024-10-01 13:52:36.545723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.465 [2024-10-01 13:52:36.550670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.465 [2024-10-01 13:52:36.550785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.465 [2024-10-01 13:52:36.550823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.465 [2024-10-01 13:52:36.550844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.465 [2024-10-01 13:52:36.551602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.465 [2024-10-01 13:52:36.551844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.465 [2024-10-01 13:52:36.551879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.465 [2024-10-01 13:52:36.551897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.465 [2024-10-01 13:52:36.551956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.465 [2024-10-01 13:52:36.555508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.465 [2024-10-01 13:52:36.555619] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.465 [2024-10-01 13:52:36.555658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.465 [2024-10-01 13:52:36.555679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.465 [2024-10-01 13:52:36.555713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.465 [2024-10-01 13:52:36.555744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.465 [2024-10-01 13:52:36.555762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.465 [2024-10-01 13:52:36.555777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.465 [2024-10-01 13:52:36.555808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.465 [2024-10-01 13:52:36.560762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.465 [2024-10-01 13:52:36.560876] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.465 [2024-10-01 13:52:36.560908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.465 [2024-10-01 13:52:36.560944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.465 [2024-10-01 13:52:36.560978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.465 [2024-10-01 13:52:36.561010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.465 [2024-10-01 13:52:36.561027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.465 [2024-10-01 13:52:36.561041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.465 [2024-10-01 13:52:36.561072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.465 [2024-10-01 13:52:36.566054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.465 [2024-10-01 13:52:36.566897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.465 [2024-10-01 13:52:36.566952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.465 [2024-10-01 13:52:36.566974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.465 [2024-10-01 13:52:36.567159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.465 [2024-10-01 13:52:36.567215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.465 [2024-10-01 13:52:36.567238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.465 [2024-10-01 13:52:36.567255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.465 [2024-10-01 13:52:36.567319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.465 [2024-10-01 13:52:36.570851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.465 [2024-10-01 13:52:36.570977] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.465 [2024-10-01 13:52:36.571016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.465 [2024-10-01 13:52:36.571037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.465 [2024-10-01 13:52:36.571070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.465 [2024-10-01 13:52:36.571101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.465 [2024-10-01 13:52:36.571118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.465 [2024-10-01 13:52:36.571133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.465 [2024-10-01 13:52:36.571164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.465 [2024-10-01 13:52:36.576148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.466 [2024-10-01 13:52:36.576259] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.466 [2024-10-01 13:52:36.576298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.466 [2024-10-01 13:52:36.576319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.466 [2024-10-01 13:52:36.576352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.466 [2024-10-01 13:52:36.576383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.466 [2024-10-01 13:52:36.576401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.466 [2024-10-01 13:52:36.576415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.466 [2024-10-01 13:52:36.576446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.466 [2024-10-01 13:52:36.581233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.466 [2024-10-01 13:52:36.581344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.466 [2024-10-01 13:52:36.581387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.466 [2024-10-01 13:52:36.581408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.466 [2024-10-01 13:52:36.582154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.466 [2024-10-01 13:52:36.582358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.466 [2024-10-01 13:52:36.582393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.466 [2024-10-01 13:52:36.582410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.466 [2024-10-01 13:52:36.582452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.466 [2024-10-01 13:52:36.586232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.466 [2024-10-01 13:52:36.586343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.466 [2024-10-01 13:52:36.586374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.466 [2024-10-01 13:52:36.586413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.466 [2024-10-01 13:52:36.586449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.466 [2024-10-01 13:52:36.586480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.466 [2024-10-01 13:52:36.586510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.466 [2024-10-01 13:52:36.586525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.466 [2024-10-01 13:52:36.586602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.466 [2024-10-01 13:52:36.591321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.466 [2024-10-01 13:52:36.591448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.466 [2024-10-01 13:52:36.591479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.466 [2024-10-01 13:52:36.591496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.466 [2024-10-01 13:52:36.591528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.466 [2024-10-01 13:52:36.591558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.466 [2024-10-01 13:52:36.591575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.466 [2024-10-01 13:52:36.591589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.466 [2024-10-01 13:52:36.591634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.466 [2024-10-01 13:52:36.596726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.466 [2024-10-01 13:52:36.596837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.466 [2024-10-01 13:52:36.596876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.466 [2024-10-01 13:52:36.596896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.466 [2024-10-01 13:52:36.597637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.466 [2024-10-01 13:52:36.597836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.466 [2024-10-01 13:52:36.597872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.466 [2024-10-01 13:52:36.597890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.466 [2024-10-01 13:52:36.597945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.466 [2024-10-01 13:52:36.601407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.466 [2024-10-01 13:52:36.601526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.466 [2024-10-01 13:52:36.601558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.466 [2024-10-01 13:52:36.601576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.466 [2024-10-01 13:52:36.601609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.466 [2024-10-01 13:52:36.601642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.466 [2024-10-01 13:52:36.601677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.466 [2024-10-01 13:52:36.601692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.466 [2024-10-01 13:52:36.601723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.466 [2024-10-01 13:52:36.606812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.466 [2024-10-01 13:52:36.606939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.466 [2024-10-01 13:52:36.606971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.466 [2024-10-01 13:52:36.606990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.466 [2024-10-01 13:52:36.607023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.466 [2024-10-01 13:52:36.607055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.466 [2024-10-01 13:52:36.607072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.466 [2024-10-01 13:52:36.607087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.466 [2024-10-01 13:52:36.607117] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.466 [2024-10-01 13:52:36.612003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.466 [2024-10-01 13:52:36.612133] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.466 [2024-10-01 13:52:36.612165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.466 [2024-10-01 13:52:36.612184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.466 [2024-10-01 13:52:36.612949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.466 [2024-10-01 13:52:36.613177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.466 [2024-10-01 13:52:36.613213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.466 [2024-10-01 13:52:36.613232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.466 [2024-10-01 13:52:36.613274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.466 [2024-10-01 13:52:36.616906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.466 [2024-10-01 13:52:36.617056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.466 [2024-10-01 13:52:36.617089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.466 [2024-10-01 13:52:36.617111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.466 [2024-10-01 13:52:36.617161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.466 [2024-10-01 13:52:36.617197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.466 [2024-10-01 13:52:36.617215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.466 [2024-10-01 13:52:36.617230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.466 [2024-10-01 13:52:36.617261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.466 [2024-10-01 13:52:36.622108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.466 [2024-10-01 13:52:36.622266] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.466 [2024-10-01 13:52:36.622300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.467 [2024-10-01 13:52:36.622319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.467 [2024-10-01 13:52:36.622352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.467 [2024-10-01 13:52:36.622384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.467 [2024-10-01 13:52:36.622402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.467 [2024-10-01 13:52:36.622416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.467 [2024-10-01 13:52:36.622447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.467 [2024-10-01 13:52:36.627372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.467 [2024-10-01 13:52:36.628220] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.467 [2024-10-01 13:52:36.628266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.467 [2024-10-01 13:52:36.628288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.467 [2024-10-01 13:52:36.628469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.467 [2024-10-01 13:52:36.628527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.467 [2024-10-01 13:52:36.628549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.467 [2024-10-01 13:52:36.628564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.467 [2024-10-01 13:52:36.628597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.467 [2024-10-01 13:52:36.632239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.467 [2024-10-01 13:52:36.632356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.467 [2024-10-01 13:52:36.632389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.467 [2024-10-01 13:52:36.632408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.467 [2024-10-01 13:52:36.632440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.467 [2024-10-01 13:52:36.632472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.467 [2024-10-01 13:52:36.632490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.467 [2024-10-01 13:52:36.632505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.467 [2024-10-01 13:52:36.632535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.467 [2024-10-01 13:52:36.637464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.467 [2024-10-01 13:52:36.637582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.467 [2024-10-01 13:52:36.637616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.467 [2024-10-01 13:52:36.637635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.467 [2024-10-01 13:52:36.637688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.467 [2024-10-01 13:52:36.637721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.467 [2024-10-01 13:52:36.637739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.467 [2024-10-01 13:52:36.637754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.467 [2024-10-01 13:52:36.637785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.467 [2024-10-01 13:52:36.642572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.467 [2024-10-01 13:52:36.642689] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.467 [2024-10-01 13:52:36.642721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.467 [2024-10-01 13:52:36.642740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.467 [2024-10-01 13:52:36.643490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.467 [2024-10-01 13:52:36.643714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.467 [2024-10-01 13:52:36.643750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.467 [2024-10-01 13:52:36.643768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.467 [2024-10-01 13:52:36.643811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.467 [2024-10-01 13:52:36.647556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.467 [2024-10-01 13:52:36.647667] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.467 [2024-10-01 13:52:36.647699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.467 [2024-10-01 13:52:36.647717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.467 [2024-10-01 13:52:36.647750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.467 [2024-10-01 13:52:36.647781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.467 [2024-10-01 13:52:36.647799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.467 [2024-10-01 13:52:36.647814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.467 [2024-10-01 13:52:36.647844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.467 [2024-10-01 13:52:36.652660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.467 [2024-10-01 13:52:36.652774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.467 [2024-10-01 13:52:36.652806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.467 [2024-10-01 13:52:36.652824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.467 [2024-10-01 13:52:36.652857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.467 [2024-10-01 13:52:36.652889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.467 [2024-10-01 13:52:36.652907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.467 [2024-10-01 13:52:36.652956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.467 [2024-10-01 13:52:36.652992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.467 [2024-10-01 13:52:36.657804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.467 [2024-10-01 13:52:36.657932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.467 [2024-10-01 13:52:36.657966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.467 [2024-10-01 13:52:36.657985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.467 [2024-10-01 13:52:36.658739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.467 [2024-10-01 13:52:36.658967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.467 [2024-10-01 13:52:36.659001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.467 [2024-10-01 13:52:36.659019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.467 [2024-10-01 13:52:36.659060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.467 [2024-10-01 13:52:36.662752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.467 [2024-10-01 13:52:36.662872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.467 [2024-10-01 13:52:36.662903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.467 [2024-10-01 13:52:36.662937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.467 [2024-10-01 13:52:36.662972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.467 [2024-10-01 13:52:36.663004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.467 [2024-10-01 13:52:36.663021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.467 [2024-10-01 13:52:36.663036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.467 [2024-10-01 13:52:36.663066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.467 [2024-10-01 13:52:36.667889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.467 [2024-10-01 13:52:36.668016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.467 [2024-10-01 13:52:36.668048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.467 [2024-10-01 13:52:36.668066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.467 [2024-10-01 13:52:36.668098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.467 [2024-10-01 13:52:36.668130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.467 [2024-10-01 13:52:36.668148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.467 [2024-10-01 13:52:36.668162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.467 [2024-10-01 13:52:36.668194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.467 [2024-10-01 13:52:36.673139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.467 [2024-10-01 13:52:36.673253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.467 [2024-10-01 13:52:36.673303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.467 [2024-10-01 13:52:36.673323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.467 [2024-10-01 13:52:36.674068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.468 [2024-10-01 13:52:36.674292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.468 [2024-10-01 13:52:36.674328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.468 [2024-10-01 13:52:36.674346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.468 [2024-10-01 13:52:36.674388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.468 [2024-10-01 13:52:36.677989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.468 [2024-10-01 13:52:36.678098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.468 [2024-10-01 13:52:36.678129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.468 [2024-10-01 13:52:36.678148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.468 [2024-10-01 13:52:36.678180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.468 [2024-10-01 13:52:36.678226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.468 [2024-10-01 13:52:36.678247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.468 [2024-10-01 13:52:36.678262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.468 [2024-10-01 13:52:36.678293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.468 [2024-10-01 13:52:36.683229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.468 [2024-10-01 13:52:36.683341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.468 [2024-10-01 13:52:36.683372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.468 [2024-10-01 13:52:36.683390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.468 [2024-10-01 13:52:36.683422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.468 [2024-10-01 13:52:36.683454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.468 [2024-10-01 13:52:36.683472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.468 [2024-10-01 13:52:36.683487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.468 [2024-10-01 13:52:36.683517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.468 [2024-10-01 13:52:36.688443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.468 [2024-10-01 13:52:36.688555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.468 [2024-10-01 13:52:36.688588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.468 [2024-10-01 13:52:36.688606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.468 [2024-10-01 13:52:36.689362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.468 [2024-10-01 13:52:36.689581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.468 [2024-10-01 13:52:36.689616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.468 [2024-10-01 13:52:36.689634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.468 [2024-10-01 13:52:36.689674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.468 [2024-10-01 13:52:36.693320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.468 [2024-10-01 13:52:36.693429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.468 [2024-10-01 13:52:36.693461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.468 [2024-10-01 13:52:36.693479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.468 [2024-10-01 13:52:36.693511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.468 [2024-10-01 13:52:36.693542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.468 [2024-10-01 13:52:36.693560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.468 [2024-10-01 13:52:36.693574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.468 [2024-10-01 13:52:36.693604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.468 [2024-10-01 13:52:36.698535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.468 [2024-10-01 13:52:36.698660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.468 [2024-10-01 13:52:36.698691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.468 [2024-10-01 13:52:36.698709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.468 [2024-10-01 13:52:36.698742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.468 [2024-10-01 13:52:36.698772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.468 [2024-10-01 13:52:36.698790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.468 [2024-10-01 13:52:36.698804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.468 [2024-10-01 13:52:36.698834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.468 [2024-10-01 13:52:36.703759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.468 [2024-10-01 13:52:36.703871] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.468 [2024-10-01 13:52:36.703902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.468 [2024-10-01 13:52:36.703937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.468 [2024-10-01 13:52:36.704667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.468 [2024-10-01 13:52:36.704863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.468 [2024-10-01 13:52:36.704897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.468 [2024-10-01 13:52:36.704928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.468 [2024-10-01 13:52:36.704991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.468 [2024-10-01 13:52:36.708638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.468 [2024-10-01 13:52:36.708748] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.468 [2024-10-01 13:52:36.708781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.468 [2024-10-01 13:52:36.708799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.468 [2024-10-01 13:52:36.708843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.468 [2024-10-01 13:52:36.708876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.468 [2024-10-01 13:52:36.708893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.468 [2024-10-01 13:52:36.708908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.468 [2024-10-01 13:52:36.708957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.468 [2024-10-01 13:52:36.713846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.468 [2024-10-01 13:52:36.713972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.468 [2024-10-01 13:52:36.714004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.468 [2024-10-01 13:52:36.714022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.468 [2024-10-01 13:52:36.714056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.468 [2024-10-01 13:52:36.714088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.468 [2024-10-01 13:52:36.714105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.468 [2024-10-01 13:52:36.714119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.468 [2024-10-01 13:52:36.714150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.468 [2024-10-01 13:52:36.719051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.468 [2024-10-01 13:52:36.719164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.468 [2024-10-01 13:52:36.719201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.468 [2024-10-01 13:52:36.719221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.468 [2024-10-01 13:52:36.719966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.468 [2024-10-01 13:52:36.720163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.468 [2024-10-01 13:52:36.720198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.468 [2024-10-01 13:52:36.720216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.468 [2024-10-01 13:52:36.720276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.468 [2024-10-01 13:52:36.723951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.468 [2024-10-01 13:52:36.724061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.468 [2024-10-01 13:52:36.724092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.468 [2024-10-01 13:52:36.724134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.468 [2024-10-01 13:52:36.724172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.468 [2024-10-01 13:52:36.724221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.468 [2024-10-01 13:52:36.724243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.468 [2024-10-01 13:52:36.724258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.469 [2024-10-01 13:52:36.724289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.469 [2024-10-01 13:52:36.729159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.469 [2024-10-01 13:52:36.729331] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.469 [2024-10-01 13:52:36.729389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.469 [2024-10-01 13:52:36.729424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.469 [2024-10-01 13:52:36.729483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.469 [2024-10-01 13:52:36.729548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.469 [2024-10-01 13:52:36.729582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.469 [2024-10-01 13:52:36.729609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.469 [2024-10-01 13:52:36.731230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.469 [2024-10-01 13:52:36.735728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.469 [2024-10-01 13:52:36.735902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.469 [2024-10-01 13:52:36.735979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.469 [2024-10-01 13:52:36.736015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.469 [2024-10-01 13:52:36.736071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.469 [2024-10-01 13:52:36.736122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.469 [2024-10-01 13:52:36.736152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.469 [2024-10-01 13:52:36.736176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.469 [2024-10-01 13:52:36.736225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.469 [2024-10-01 13:52:36.739290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.469 [2024-10-01 13:52:36.739467] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.469 [2024-10-01 13:52:36.739528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.469 [2024-10-01 13:52:36.739566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.469 [2024-10-01 13:52:36.739620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.469 [2024-10-01 13:52:36.739670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.469 [2024-10-01 13:52:36.739733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.469 [2024-10-01 13:52:36.739764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.469 [2024-10-01 13:52:36.739816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.469 [2024-10-01 13:52:36.746712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.469 [2024-10-01 13:52:36.747946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.469 [2024-10-01 13:52:36.748010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.469 [2024-10-01 13:52:36.748047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.469 [2024-10-01 13:52:36.748293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.469 [2024-10-01 13:52:36.748463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.469 [2024-10-01 13:52:36.748514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.469 [2024-10-01 13:52:36.748549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.469 [2024-10-01 13:52:36.750025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.469 [2024-10-01 13:52:36.751393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.469 [2024-10-01 13:52:36.751573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.469 [2024-10-01 13:52:36.751637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.469 [2024-10-01 13:52:36.751674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.469 [2024-10-01 13:52:36.753133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.469 [2024-10-01 13:52:36.754223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.469 [2024-10-01 13:52:36.754284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.469 [2024-10-01 13:52:36.754319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.469 [2024-10-01 13:52:36.754471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.469 [2024-10-01 13:52:36.758585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.469 [2024-10-01 13:52:36.758758] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.469 [2024-10-01 13:52:36.758820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.469 [2024-10-01 13:52:36.758856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.469 [2024-10-01 13:52:36.759212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.469 [2024-10-01 13:52:36.759455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.469 [2024-10-01 13:52:36.759511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.469 [2024-10-01 13:52:36.759543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.469 [2024-10-01 13:52:36.759689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.469 [2024-10-01 13:52:36.761897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.469 [2024-10-01 13:52:36.762086] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.469 [2024-10-01 13:52:36.762146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.469 [2024-10-01 13:52:36.762181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.469 [2024-10-01 13:52:36.763320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.469 [2024-10-01 13:52:36.763650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.469 [2024-10-01 13:52:36.763708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.469 [2024-10-01 13:52:36.763742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.469 [2024-10-01 13:52:36.763892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.469 [2024-10-01 13:52:36.769625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.469 [2024-10-01 13:52:36.769813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.469 [2024-10-01 13:52:36.769878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.469 [2024-10-01 13:52:36.769934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.469 [2024-10-01 13:52:36.770002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.469 [2024-10-01 13:52:36.770056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.469 [2024-10-01 13:52:36.770087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.469 [2024-10-01 13:52:36.770114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.469 [2024-10-01 13:52:36.770184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.469 [2024-10-01 13:52:36.774135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.469 [2024-10-01 13:52:36.774623] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.469 [2024-10-01 13:52:36.774686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.469 [2024-10-01 13:52:36.774723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.469 [2024-10-01 13:52:36.774948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.469 [2024-10-01 13:52:36.775119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.469 [2024-10-01 13:52:36.775166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.469 [2024-10-01 13:52:36.775197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.469 [2024-10-01 13:52:36.775260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.469 [2024-10-01 13:52:36.781873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.469 [2024-10-01 13:52:36.782063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.469 [2024-10-01 13:52:36.782123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.469 [2024-10-01 13:52:36.782158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.469 [2024-10-01 13:52:36.782260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.469 [2024-10-01 13:52:36.782317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.469 [2024-10-01 13:52:36.782350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.469 [2024-10-01 13:52:36.782376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.469 [2024-10-01 13:52:36.783896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.469 [2024-10-01 13:52:36.785168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.469 [2024-10-01 13:52:36.785334] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.470 [2024-10-01 13:52:36.785393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.470 [2024-10-01 13:52:36.785429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.470 [2024-10-01 13:52:36.785484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.470 [2024-10-01 13:52:36.785536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.470 [2024-10-01 13:52:36.785568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.470 [2024-10-01 13:52:36.785595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.470 [2024-10-01 13:52:36.785645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.470 [2024-10-01 13:52:36.792747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.470 [2024-10-01 13:52:36.793995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.470 [2024-10-01 13:52:36.794059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.470 [2024-10-01 13:52:36.794097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.470 [2024-10-01 13:52:36.794374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.470 [2024-10-01 13:52:36.794563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.470 [2024-10-01 13:52:36.794613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.470 [2024-10-01 13:52:36.794645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.470 [2024-10-01 13:52:36.796149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.470 [2024-10-01 13:52:36.797543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.470 [2024-10-01 13:52:36.797725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.470 [2024-10-01 13:52:36.797784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.470 [2024-10-01 13:52:36.797819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.470 [2024-10-01 13:52:36.799331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.470 [2024-10-01 13:52:36.800448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.470 [2024-10-01 13:52:36.800508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.470 [2024-10-01 13:52:36.800566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.470 [2024-10-01 13:52:36.800734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.470 [2024-10-01 13:52:36.804714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.470 [2024-10-01 13:52:36.804877] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.470 [2024-10-01 13:52:36.804948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.470 [2024-10-01 13:52:36.804986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.470 [2024-10-01 13:52:36.805330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.470 [2024-10-01 13:52:36.805547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.470 [2024-10-01 13:52:36.805599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.470 [2024-10-01 13:52:36.805630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.470 [2024-10-01 13:52:36.805777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.470 [2024-10-01 13:52:36.808031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.470 [2024-10-01 13:52:36.809260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.470 [2024-10-01 13:52:36.809330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.470 [2024-10-01 13:52:36.809368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.470 [2024-10-01 13:52:36.809621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.470 [2024-10-01 13:52:36.809791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.470 [2024-10-01 13:52:36.809840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.470 [2024-10-01 13:52:36.809873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.470 [2024-10-01 13:52:36.811385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.470 [2024-10-01 13:52:36.815505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.470 [2024-10-01 13:52:36.815667] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.470 [2024-10-01 13:52:36.815726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.470 [2024-10-01 13:52:36.815761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.470 [2024-10-01 13:52:36.815816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.470 [2024-10-01 13:52:36.815867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.470 [2024-10-01 13:52:36.815899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.470 [2024-10-01 13:52:36.815947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.470 [2024-10-01 13:52:36.816001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.470 [2024-10-01 13:52:36.819932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.470 [2024-10-01 13:52:36.820113] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.470 [2024-10-01 13:52:36.820197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.470 [2024-10-01 13:52:36.820233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.470 [2024-10-01 13:52:36.820583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.470 [2024-10-01 13:52:36.820831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.470 [2024-10-01 13:52:36.820884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.470 [2024-10-01 13:52:36.820931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.470 [2024-10-01 13:52:36.821094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.470 [2024-10-01 13:52:36.827526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.470 [2024-10-01 13:52:36.827689] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.470 [2024-10-01 13:52:36.827745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.470 [2024-10-01 13:52:36.827778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.470 [2024-10-01 13:52:36.827830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.470 [2024-10-01 13:52:36.827877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.470 [2024-10-01 13:52:36.827906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.470 [2024-10-01 13:52:36.827951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.470 [2024-10-01 13:52:36.828002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.470 [2024-10-01 13:52:36.831555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.470 [2024-10-01 13:52:36.831717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.470 [2024-10-01 13:52:36.831776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.470 [2024-10-01 13:52:36.831810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.470 [2024-10-01 13:52:36.831865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.470 [2024-10-01 13:52:36.831934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.470 [2024-10-01 13:52:36.831967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.470 [2024-10-01 13:52:36.831993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.470 [2024-10-01 13:52:36.832965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.470 [2024-10-01 13:52:36.839219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.470 [2024-10-01 13:52:36.839523] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.470 [2024-10-01 13:52:36.839569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.470 [2024-10-01 13:52:36.839590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.471 [2024-10-01 13:52:36.839670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.471 [2024-10-01 13:52:36.840896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.471 [2024-10-01 13:52:36.840945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.471 [2024-10-01 13:52:36.840964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.471 [2024-10-01 13:52:36.841806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.471 [2024-10-01 13:52:36.842074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.471 [2024-10-01 13:52:36.842187] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.471 [2024-10-01 13:52:36.842227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.471 [2024-10-01 13:52:36.842247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.471 [2024-10-01 13:52:36.842282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.471 [2024-10-01 13:52:36.842313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.471 [2024-10-01 13:52:36.842331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.471 [2024-10-01 13:52:36.842345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.471 [2024-10-01 13:52:36.842376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.471 [2024-10-01 13:52:36.849321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.471 [2024-10-01 13:52:36.849437] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.471 [2024-10-01 13:52:36.849469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.471 [2024-10-01 13:52:36.849488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.471 [2024-10-01 13:52:36.849521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.471 [2024-10-01 13:52:36.849553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.471 [2024-10-01 13:52:36.849570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.471 [2024-10-01 13:52:36.849584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.471 [2024-10-01 13:52:36.849617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.471 [2024-10-01 13:52:36.852480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.471 [2024-10-01 13:52:36.852593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.471 [2024-10-01 13:52:36.852632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.471 [2024-10-01 13:52:36.852652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.471 [2024-10-01 13:52:36.852686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.471 [2024-10-01 13:52:36.852718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.471 [2024-10-01 13:52:36.852736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.471 [2024-10-01 13:52:36.852750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.471 [2024-10-01 13:52:36.853688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.471 [2024-10-01 13:52:36.859570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.471 [2024-10-01 13:52:36.859694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.471 [2024-10-01 13:52:36.859729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.471 [2024-10-01 13:52:36.859747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.471 [2024-10-01 13:52:36.859781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.471 [2024-10-01 13:52:36.859813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.471 [2024-10-01 13:52:36.859831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.471 [2024-10-01 13:52:36.859846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.471 [2024-10-01 13:52:36.859876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.471 [2024-10-01 13:52:36.863721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.471 [2024-10-01 13:52:36.863836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.471 [2024-10-01 13:52:36.863877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.471 [2024-10-01 13:52:36.863898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.471 [2024-10-01 13:52:36.863948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.471 [2024-10-01 13:52:36.863982] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.471 [2024-10-01 13:52:36.864000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.471 [2024-10-01 13:52:36.864015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.471 [2024-10-01 13:52:36.864047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.471 8081.38 IOPS, 31.57 MiB/s [2024-10-01 13:52:36.872042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.471 [2024-10-01 13:52:36.873306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.471 [2024-10-01 13:52:36.873350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.471 [2024-10-01 13:52:36.873371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.471 [2024-10-01 13:52:36.874243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.471 [2024-10-01 13:52:36.874389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.471 [2024-10-01 13:52:36.874425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.471 [2024-10-01 13:52:36.874444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.471 [2024-10-01 13:52:36.874482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.471 [2024-10-01 13:52:36.874507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.471 [2024-10-01 13:52:36.874602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.471 [2024-10-01 13:52:36.874634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.471 [2024-10-01 13:52:36.874677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.471 [2024-10-01 13:52:36.874712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.471 [2024-10-01 13:52:36.874744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.471 [2024-10-01 13:52:36.874762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.471 [2024-10-01 13:52:36.874776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.471 [2024-10-01 13:52:36.874807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.471 [2024-10-01 13:52:36.883089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.471 [2024-10-01 13:52:36.883203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.471 [2024-10-01 13:52:36.883235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.471 [2024-10-01 13:52:36.883253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.471 [2024-10-01 13:52:36.883286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.471 [2024-10-01 13:52:36.883328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.471 [2024-10-01 13:52:36.883348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.471 [2024-10-01 13:52:36.883362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.471 [2024-10-01 13:52:36.883393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.471 [2024-10-01 13:52:36.885476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.471 [2024-10-01 13:52:36.885727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.471 [2024-10-01 13:52:36.885767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.471 [2024-10-01 13:52:36.885788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.471 [2024-10-01 13:52:36.885843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.471 [2024-10-01 13:52:36.885880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.471 [2024-10-01 13:52:36.885898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.471 [2024-10-01 13:52:36.885927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.471 [2024-10-01 13:52:36.885964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.471 [2024-10-01 13:52:36.893183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.471 [2024-10-01 13:52:36.893295] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.471 [2024-10-01 13:52:36.893327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.471 [2024-10-01 13:52:36.893345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.471 [2024-10-01 13:52:36.893600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.471 [2024-10-01 13:52:36.893768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.472 [2024-10-01 13:52:36.893819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.472 [2024-10-01 13:52:36.893838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.472 [2024-10-01 13:52:36.893983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.472 [2024-10-01 13:52:36.896105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.472 [2024-10-01 13:52:36.896215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.472 [2024-10-01 13:52:36.896254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.472 [2024-10-01 13:52:36.896274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.472 [2024-10-01 13:52:36.896307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.472 [2024-10-01 13:52:36.896339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.472 [2024-10-01 13:52:36.896357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.472 [2024-10-01 13:52:36.896371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.472 [2024-10-01 13:52:36.896403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.472 [2024-10-01 13:52:36.903276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.472 [2024-10-01 13:52:36.903388] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.472 [2024-10-01 13:52:36.903427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.472 [2024-10-01 13:52:36.903447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.472 [2024-10-01 13:52:36.903481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.472 [2024-10-01 13:52:36.903513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.472 [2024-10-01 13:52:36.903530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.472 [2024-10-01 13:52:36.903545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.472 [2024-10-01 13:52:36.903576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.472 [2024-10-01 13:52:36.907210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.472 [2024-10-01 13:52:36.907321] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.472 [2024-10-01 13:52:36.907354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.472 [2024-10-01 13:52:36.907373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.472 [2024-10-01 13:52:36.907406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.472 [2024-10-01 13:52:36.907437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.472 [2024-10-01 13:52:36.907455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.472 [2024-10-01 13:52:36.907469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.472 [2024-10-01 13:52:36.907500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.472 [2024-10-01 13:52:36.913660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.472 [2024-10-01 13:52:36.914485] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.472 [2024-10-01 13:52:36.914529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.472 [2024-10-01 13:52:36.914562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.472 [2024-10-01 13:52:36.914734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.472 [2024-10-01 13:52:36.914809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.472 [2024-10-01 13:52:36.914833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.472 [2024-10-01 13:52:36.914848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.472 [2024-10-01 13:52:36.914881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.472 [2024-10-01 13:52:36.917541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.472 [2024-10-01 13:52:36.917652] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.472 [2024-10-01 13:52:36.917691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.472 [2024-10-01 13:52:36.917711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.472 [2024-10-01 13:52:36.917745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.472 [2024-10-01 13:52:36.917776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.472 [2024-10-01 13:52:36.917794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.472 [2024-10-01 13:52:36.917809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.472 [2024-10-01 13:52:36.917840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.472 [2024-10-01 13:52:36.925022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.472 [2024-10-01 13:52:36.925145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.472 [2024-10-01 13:52:36.925176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.472 [2024-10-01 13:52:36.925195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.472 [2024-10-01 13:52:36.925228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.472 [2024-10-01 13:52:36.925259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.472 [2024-10-01 13:52:36.925277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.472 [2024-10-01 13:52:36.925292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.472 [2024-10-01 13:52:36.925322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.472 [2024-10-01 13:52:36.928166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.472 [2024-10-01 13:52:36.928987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.472 [2024-10-01 13:52:36.929030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.472 [2024-10-01 13:52:36.929051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.472 [2024-10-01 13:52:36.929247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.472 [2024-10-01 13:52:36.929303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.472 [2024-10-01 13:52:36.929325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.472 [2024-10-01 13:52:36.929340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.472 [2024-10-01 13:52:36.929372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.472 [2024-10-01 13:52:36.936126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.472 [2024-10-01 13:52:36.936239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.472 [2024-10-01 13:52:36.936281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.472 [2024-10-01 13:52:36.936302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.472 [2024-10-01 13:52:36.936335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.472 [2024-10-01 13:52:36.936367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.472 [2024-10-01 13:52:36.936384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.472 [2024-10-01 13:52:36.936398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.472 [2024-10-01 13:52:36.936429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.472 [2024-10-01 13:52:36.939529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.472 [2024-10-01 13:52:36.939641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.472 [2024-10-01 13:52:36.939672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.472 [2024-10-01 13:52:36.939692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.472 [2024-10-01 13:52:36.939724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.472 [2024-10-01 13:52:36.939756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.472 [2024-10-01 13:52:36.939774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.472 [2024-10-01 13:52:36.939789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.472 [2024-10-01 13:52:36.940700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.472 [2024-10-01 13:52:36.946510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.472 [2024-10-01 13:52:36.946634] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.472 [2024-10-01 13:52:36.946673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.472 [2024-10-01 13:52:36.946693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.472 [2024-10-01 13:52:36.946727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.472 [2024-10-01 13:52:36.946758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.472 [2024-10-01 13:52:36.946775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.472 [2024-10-01 13:52:36.946809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.473 [2024-10-01 13:52:36.946843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.473 [2024-10-01 13:52:36.950578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.473 [2024-10-01 13:52:36.950689] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.473 [2024-10-01 13:52:36.950728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.473 [2024-10-01 13:52:36.950749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.473 [2024-10-01 13:52:36.950781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.473 [2024-10-01 13:52:36.950813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.473 [2024-10-01 13:52:36.950831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.473 [2024-10-01 13:52:36.950845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.473 [2024-10-01 13:52:36.950884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.473 [2024-10-01 13:52:36.957096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.473 [2024-10-01 13:52:36.957905] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.473 [2024-10-01 13:52:36.957984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.473 [2024-10-01 13:52:36.958006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.473 [2024-10-01 13:52:36.958179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.473 [2024-10-01 13:52:36.958235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.473 [2024-10-01 13:52:36.958256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.473 [2024-10-01 13:52:36.958271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.473 [2024-10-01 13:52:36.958303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.473 [2024-10-01 13:52:36.960990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.473 [2024-10-01 13:52:36.961099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.473 [2024-10-01 13:52:36.961131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.473 [2024-10-01 13:52:36.961149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.473 [2024-10-01 13:52:36.961182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.473 [2024-10-01 13:52:36.961213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.473 [2024-10-01 13:52:36.961230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.473 [2024-10-01 13:52:36.961245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.473 [2024-10-01 13:52:36.961274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.473 [2024-10-01 13:52:36.968434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.473 [2024-10-01 13:52:36.968562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.473 [2024-10-01 13:52:36.968602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.473 [2024-10-01 13:52:36.968623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.473 [2024-10-01 13:52:36.968657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.473 [2024-10-01 13:52:36.968689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.473 [2024-10-01 13:52:36.968707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.473 [2024-10-01 13:52:36.968721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.473 [2024-10-01 13:52:36.969621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.473 [2024-10-01 13:52:36.971516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.473 [2024-10-01 13:52:36.972333] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.473 [2024-10-01 13:52:36.972375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.473 [2024-10-01 13:52:36.972396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.473 [2024-10-01 13:52:36.972565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.473 [2024-10-01 13:52:36.972639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.473 [2024-10-01 13:52:36.972668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.473 [2024-10-01 13:52:36.972684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.473 [2024-10-01 13:52:36.972717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.473 [2024-10-01 13:52:36.979482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.473 [2024-10-01 13:52:36.979593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.473 [2024-10-01 13:52:36.979631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.473 [2024-10-01 13:52:36.979652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.473 [2024-10-01 13:52:36.979685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.473 [2024-10-01 13:52:36.979716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.473 [2024-10-01 13:52:36.979733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.473 [2024-10-01 13:52:36.979748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.473 [2024-10-01 13:52:36.979779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.473 [2024-10-01 13:52:36.982842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.473 [2024-10-01 13:52:36.982965] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.473 [2024-10-01 13:52:36.983004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.473 [2024-10-01 13:52:36.983025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.473 [2024-10-01 13:52:36.983058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.473 [2024-10-01 13:52:36.983107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.473 [2024-10-01 13:52:36.983127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.473 [2024-10-01 13:52:36.983142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.473 [2024-10-01 13:52:36.983173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.473 [2024-10-01 13:52:36.989866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.473 [2024-10-01 13:52:36.989995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.473 [2024-10-01 13:52:36.990035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.473 [2024-10-01 13:52:36.990056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.473 [2024-10-01 13:52:36.990089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.473 [2024-10-01 13:52:36.990119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.473 [2024-10-01 13:52:36.990136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.473 [2024-10-01 13:52:36.990151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.473 [2024-10-01 13:52:36.990182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.473 [2024-10-01 13:52:36.993952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.473 [2024-10-01 13:52:36.994062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.473 [2024-10-01 13:52:36.994101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.473 [2024-10-01 13:52:36.994122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.473 [2024-10-01 13:52:36.994156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.473 [2024-10-01 13:52:36.994187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.473 [2024-10-01 13:52:36.994204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.473 [2024-10-01 13:52:36.994232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.473 [2024-10-01 13:52:36.994264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.473 [2024-10-01 13:52:37.000498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.473 [2024-10-01 13:52:37.001360] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.473 [2024-10-01 13:52:37.001404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.473 [2024-10-01 13:52:37.001425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.473 [2024-10-01 13:52:37.001621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.473 [2024-10-01 13:52:37.001679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.473 [2024-10-01 13:52:37.001701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.473 [2024-10-01 13:52:37.001718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.473 [2024-10-01 13:52:37.001778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.473 [2024-10-01 13:52:37.004459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.473 [2024-10-01 13:52:37.004572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.474 [2024-10-01 13:52:37.004606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.474 [2024-10-01 13:52:37.004625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.474 [2024-10-01 13:52:37.004657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.474 [2024-10-01 13:52:37.004688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.474 [2024-10-01 13:52:37.004706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.474 [2024-10-01 13:52:37.004721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.474 [2024-10-01 13:52:37.004752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.474 [2024-10-01 13:52:37.012001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.474 [2024-10-01 13:52:37.012120] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.474 [2024-10-01 13:52:37.012153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.474 [2024-10-01 13:52:37.012172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.474 [2024-10-01 13:52:37.012205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.474 [2024-10-01 13:52:37.012237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.474 [2024-10-01 13:52:37.012255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.474 [2024-10-01 13:52:37.012270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.474 [2024-10-01 13:52:37.012301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.474 [2024-10-01 13:52:37.015112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.474 [2024-10-01 13:52:37.015933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.474 [2024-10-01 13:52:37.015975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.474 [2024-10-01 13:52:37.015995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.474 [2024-10-01 13:52:37.016180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.474 [2024-10-01 13:52:37.016237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.474 [2024-10-01 13:52:37.016259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.474 [2024-10-01 13:52:37.016274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.474 [2024-10-01 13:52:37.016306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.474 [2024-10-01 13:52:37.023113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.474 [2024-10-01 13:52:37.023229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.474 [2024-10-01 13:52:37.023261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.474 [2024-10-01 13:52:37.023305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.474 [2024-10-01 13:52:37.023341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.474 [2024-10-01 13:52:37.023373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.474 [2024-10-01 13:52:37.023391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.474 [2024-10-01 13:52:37.023405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.474 [2024-10-01 13:52:37.023436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.474 [2024-10-01 13:52:37.026487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.474 [2024-10-01 13:52:37.026607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.474 [2024-10-01 13:52:37.026646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.474 [2024-10-01 13:52:37.026667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.474 [2024-10-01 13:52:37.026700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.474 [2024-10-01 13:52:37.026739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.474 [2024-10-01 13:52:37.026757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.474 [2024-10-01 13:52:37.026772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.474 [2024-10-01 13:52:37.027683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.474 [2024-10-01 13:52:37.033478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.474 [2024-10-01 13:52:37.033593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.474 [2024-10-01 13:52:37.033632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.474 [2024-10-01 13:52:37.033653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.474 [2024-10-01 13:52:37.033687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.474 [2024-10-01 13:52:37.033718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.474 [2024-10-01 13:52:37.033735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.474 [2024-10-01 13:52:37.033751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.474 [2024-10-01 13:52:37.033781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.474 [2024-10-01 13:52:37.037625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.474 [2024-10-01 13:52:37.037735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.474 [2024-10-01 13:52:37.037774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.474 [2024-10-01 13:52:37.037795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.474 [2024-10-01 13:52:37.037828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.474 [2024-10-01 13:52:37.037860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.474 [2024-10-01 13:52:37.037892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.474 [2024-10-01 13:52:37.037908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.474 [2024-10-01 13:52:37.037961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.474 [2024-10-01 13:52:37.044172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.474 [2024-10-01 13:52:37.044995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.474 [2024-10-01 13:52:37.045037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.474 [2024-10-01 13:52:37.045058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.474 [2024-10-01 13:52:37.045228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.474 [2024-10-01 13:52:37.045284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.474 [2024-10-01 13:52:37.045305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.474 [2024-10-01 13:52:37.045319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.474 [2024-10-01 13:52:37.045351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.474 [2024-10-01 13:52:37.048114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.474 [2024-10-01 13:52:37.048225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.474 [2024-10-01 13:52:37.048263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.474 [2024-10-01 13:52:37.048284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.474 [2024-10-01 13:52:37.048317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.474 [2024-10-01 13:52:37.048348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.474 [2024-10-01 13:52:37.048366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.474 [2024-10-01 13:52:37.048381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.474 [2024-10-01 13:52:37.048412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.474 [2024-10-01 13:52:37.055588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.474 [2024-10-01 13:52:37.055701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.474 [2024-10-01 13:52:37.055732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.474 [2024-10-01 13:52:37.055750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.474 [2024-10-01 13:52:37.055783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.474 [2024-10-01 13:52:37.055815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.474 [2024-10-01 13:52:37.055832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.474 [2024-10-01 13:52:37.055847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.474 [2024-10-01 13:52:37.055877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.474 [2024-10-01 13:52:37.058759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.474 [2024-10-01 13:52:37.059591] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.474 [2024-10-01 13:52:37.059634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.474 [2024-10-01 13:52:37.059655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.474 [2024-10-01 13:52:37.059836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.475 [2024-10-01 13:52:37.059925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.475 [2024-10-01 13:52:37.059951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.475 [2024-10-01 13:52:37.059967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.475 [2024-10-01 13:52:37.060000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.475 [2024-10-01 13:52:37.066875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.475 [2024-10-01 13:52:37.067056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.475 [2024-10-01 13:52:37.067101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.475 [2024-10-01 13:52:37.067122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.475 [2024-10-01 13:52:37.067157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.475 [2024-10-01 13:52:37.067204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.475 [2024-10-01 13:52:37.067225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.475 [2024-10-01 13:52:37.067241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.475 [2024-10-01 13:52:37.067274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.475 [2024-10-01 13:52:37.068854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.475 [2024-10-01 13:52:37.070196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.475 [2024-10-01 13:52:37.070241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.475 [2024-10-01 13:52:37.070262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.475 [2024-10-01 13:52:37.070486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.475 [2024-10-01 13:52:37.070558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.475 [2024-10-01 13:52:37.070583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.475 [2024-10-01 13:52:37.070599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.475 [2024-10-01 13:52:37.070632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.475 [2024-10-01 13:52:37.077611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.475 [2024-10-01 13:52:37.077785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.475 [2024-10-01 13:52:37.077821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.475 [2024-10-01 13:52:37.077840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.475 [2024-10-01 13:52:37.077936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.475 [2024-10-01 13:52:37.077975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.475 [2024-10-01 13:52:37.077994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.475 [2024-10-01 13:52:37.078011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.475 [2024-10-01 13:52:37.078043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.475 [2024-10-01 13:52:37.078953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.475 [2024-10-01 13:52:37.089424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.475 [2024-10-01 13:52:37.089602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.475 [2024-10-01 13:52:37.089638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.475 [2024-10-01 13:52:37.089658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.475 [2024-10-01 13:52:37.089707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.475 [2024-10-01 13:52:37.089765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.475 [2024-10-01 13:52:37.089789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.475 [2024-10-01 13:52:37.089806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.475 [2024-10-01 13:52:37.089855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.475 [2024-10-01 13:52:37.096003] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:34.475 [2024-10-01 13:52:37.100843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.475 [2024-10-01 13:52:37.100987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.475 [2024-10-01 13:52:37.101041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.475 [2024-10-01 13:52:37.101062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.475 [2024-10-01 13:52:37.101103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.475 [2024-10-01 13:52:37.101140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.475 [2024-10-01 13:52:37.101159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.475 [2024-10-01 13:52:37.101174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.475 [2024-10-01 13:52:37.101210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.475 [2024-10-01 13:52:37.111726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.475 [2024-10-01 13:52:37.111964] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.475 [2024-10-01 13:52:37.112006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.475 [2024-10-01 13:52:37.112028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.475 [2024-10-01 13:52:37.112146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.475 [2024-10-01 13:52:37.112414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.475 [2024-10-01 13:52:37.112449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.475 [2024-10-01 13:52:37.112468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.475 [2024-10-01 13:52:37.112537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.475 [2024-10-01 13:52:37.121852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.475 [2024-10-01 13:52:37.122001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.475 [2024-10-01 13:52:37.122035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.475 [2024-10-01 13:52:37.122053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.475 [2024-10-01 13:52:37.122091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.475 [2024-10-01 13:52:37.122126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.475 [2024-10-01 13:52:37.122143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.475 [2024-10-01 13:52:37.122158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.475 [2024-10-01 13:52:37.122193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.475 [2024-10-01 13:52:37.131984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.475 [2024-10-01 13:52:37.132110] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.475 [2024-10-01 13:52:37.132144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.475 [2024-10-01 13:52:37.132162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.475 [2024-10-01 13:52:37.132201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.475 [2024-10-01 13:52:37.132237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.475 [2024-10-01 13:52:37.132254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.475 [2024-10-01 13:52:37.132269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.475 [2024-10-01 13:52:37.132304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.475 [2024-10-01 13:52:37.143303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.475 [2024-10-01 13:52:37.144358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.475 [2024-10-01 13:52:37.144403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.475 [2024-10-01 13:52:37.144426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.475 [2024-10-01 13:52:37.144577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.475 [2024-10-01 13:52:37.144659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.476 [2024-10-01 13:52:37.144689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.476 [2024-10-01 13:52:37.144707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.476 [2024-10-01 13:52:37.144771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.476 [2024-10-01 13:52:37.153406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.476 [2024-10-01 13:52:37.153534] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.476 [2024-10-01 13:52:37.153567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.476 [2024-10-01 13:52:37.153586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.476 [2024-10-01 13:52:37.153623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.476 [2024-10-01 13:52:37.153658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.476 [2024-10-01 13:52:37.153676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.476 [2024-10-01 13:52:37.153691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.476 [2024-10-01 13:52:37.153726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.476 [2024-10-01 13:52:37.163517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.476 [2024-10-01 13:52:37.163649] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.476 [2024-10-01 13:52:37.163685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.476 [2024-10-01 13:52:37.163704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.476 [2024-10-01 13:52:37.164441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.476 [2024-10-01 13:52:37.164572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.476 [2024-10-01 13:52:37.164606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.476 [2024-10-01 13:52:37.164624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.476 [2024-10-01 13:52:37.164662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.476 [2024-10-01 13:52:37.174910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.476 [2024-10-01 13:52:37.175083] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.476 [2024-10-01 13:52:37.175118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.476 [2024-10-01 13:52:37.175137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.476 [2024-10-01 13:52:37.175177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.476 [2024-10-01 13:52:37.175213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.476 [2024-10-01 13:52:37.175233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.476 [2024-10-01 13:52:37.175249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.476 [2024-10-01 13:52:37.175286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.476 [2024-10-01 13:52:37.185396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.476 [2024-10-01 13:52:37.185584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.476 [2024-10-01 13:52:37.185619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.476 [2024-10-01 13:52:37.185673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.476 [2024-10-01 13:52:37.185717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.476 [2024-10-01 13:52:37.185754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.476 [2024-10-01 13:52:37.185774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.476 [2024-10-01 13:52:37.185791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.476 [2024-10-01 13:52:37.185827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.476 [2024-10-01 13:52:37.195535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.476 [2024-10-01 13:52:37.195715] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.476 [2024-10-01 13:52:37.195758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.476 [2024-10-01 13:52:37.195780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.476 [2024-10-01 13:52:37.195821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.476 [2024-10-01 13:52:37.195858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.476 [2024-10-01 13:52:37.195876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.476 [2024-10-01 13:52:37.195893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.476 [2024-10-01 13:52:37.195946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.476 [2024-10-01 13:52:37.205675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.476 [2024-10-01 13:52:37.205807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.476 [2024-10-01 13:52:37.205840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.476 [2024-10-01 13:52:37.205859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.476 [2024-10-01 13:52:37.207384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.476 [2024-10-01 13:52:37.207575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.476 [2024-10-01 13:52:37.207609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.476 [2024-10-01 13:52:37.207627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.476 [2024-10-01 13:52:37.207716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.476 [2024-10-01 13:52:37.217478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.476 [2024-10-01 13:52:37.217597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.476 [2024-10-01 13:52:37.217629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.476 [2024-10-01 13:52:37.217647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.476 [2024-10-01 13:52:37.217685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.476 [2024-10-01 13:52:37.217721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.476 [2024-10-01 13:52:37.217770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.476 [2024-10-01 13:52:37.217786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.476 [2024-10-01 13:52:37.217822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.476 [2024-10-01 13:52:37.227575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.476 [2024-10-01 13:52:37.227702] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.476 [2024-10-01 13:52:37.227734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.476 [2024-10-01 13:52:37.227752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.476 [2024-10-01 13:52:37.227789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.476 [2024-10-01 13:52:37.227824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.476 [2024-10-01 13:52:37.227841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.476 [2024-10-01 13:52:37.227855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.476 [2024-10-01 13:52:37.227889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.476 [2024-10-01 13:52:37.237679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.476 [2024-10-01 13:52:37.237795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.476 [2024-10-01 13:52:37.237827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.476 [2024-10-01 13:52:37.237845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.476 [2024-10-01 13:52:37.237883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.476 [2024-10-01 13:52:37.237933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.476 [2024-10-01 13:52:37.237955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.476 [2024-10-01 13:52:37.237970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.476 [2024-10-01 13:52:37.238005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.476 [2024-10-01 13:52:37.247784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.476 [2024-10-01 13:52:37.247900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.476 [2024-10-01 13:52:37.247950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.476 [2024-10-01 13:52:37.247969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.476 [2024-10-01 13:52:37.248007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.476 [2024-10-01 13:52:37.248041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.476 [2024-10-01 13:52:37.248058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.476 [2024-10-01 13:52:37.248073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.476 [2024-10-01 13:52:37.248107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.477 [2024-10-01 13:52:37.257880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.477 [2024-10-01 13:52:37.258009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.477 [2024-10-01 13:52:37.258042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.477 [2024-10-01 13:52:37.258060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.477 [2024-10-01 13:52:37.258096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.477 [2024-10-01 13:52:37.258134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.477 [2024-10-01 13:52:37.258152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.477 [2024-10-01 13:52:37.258166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.477 [2024-10-01 13:52:37.258201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.477 [2024-10-01 13:52:37.269429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.477 [2024-10-01 13:52:37.269546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.477 [2024-10-01 13:52:37.269578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.477 [2024-10-01 13:52:37.269596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.477 [2024-10-01 13:52:37.269632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.477 [2024-10-01 13:52:37.269668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.477 [2024-10-01 13:52:37.269685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.477 [2024-10-01 13:52:37.269700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.477 [2024-10-01 13:52:37.269735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.477 [2024-10-01 13:52:37.279531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.477 [2024-10-01 13:52:37.279648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.477 [2024-10-01 13:52:37.279680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.477 [2024-10-01 13:52:37.279699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.477 [2024-10-01 13:52:37.279735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.477 [2024-10-01 13:52:37.279782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.477 [2024-10-01 13:52:37.279799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.477 [2024-10-01 13:52:37.279813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.477 [2024-10-01 13:52:37.279848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.477 [2024-10-01 13:52:37.289629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.477 [2024-10-01 13:52:37.289747] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.477 [2024-10-01 13:52:37.289779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.477 [2024-10-01 13:52:37.289797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.477 [2024-10-01 13:52:37.289855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.477 [2024-10-01 13:52:37.291097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.477 [2024-10-01 13:52:37.291135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.477 [2024-10-01 13:52:37.291153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.477 [2024-10-01 13:52:37.291946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.477 [2024-10-01 13:52:37.299807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.477 [2024-10-01 13:52:37.299937] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.477 [2024-10-01 13:52:37.299970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.477 [2024-10-01 13:52:37.299988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.477 [2024-10-01 13:52:37.300025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.477 [2024-10-01 13:52:37.300061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.477 [2024-10-01 13:52:37.300078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.477 [2024-10-01 13:52:37.300092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.477 [2024-10-01 13:52:37.300127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.477 [2024-10-01 13:52:37.309908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.477 [2024-10-01 13:52:37.310035] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.477 [2024-10-01 13:52:37.310067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.477 [2024-10-01 13:52:37.310085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.477 [2024-10-01 13:52:37.310121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.477 [2024-10-01 13:52:37.310377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.477 [2024-10-01 13:52:37.310412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.477 [2024-10-01 13:52:37.310429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.477 [2024-10-01 13:52:37.310504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.477 [2024-10-01 13:52:37.322081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.477 [2024-10-01 13:52:37.322933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.477 [2024-10-01 13:52:37.322977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.477 [2024-10-01 13:52:37.322998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.477 [2024-10-01 13:52:37.323104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.477 [2024-10-01 13:52:37.323147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.477 [2024-10-01 13:52:37.323165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.477 [2024-10-01 13:52:37.323198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.477 [2024-10-01 13:52:37.323247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.477 [2024-10-01 13:52:37.332355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.477 [2024-10-01 13:52:37.332472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.477 [2024-10-01 13:52:37.332505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.477 [2024-10-01 13:52:37.332524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.477 [2024-10-01 13:52:37.332560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.477 [2024-10-01 13:52:37.332595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.477 [2024-10-01 13:52:37.332612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.477 [2024-10-01 13:52:37.332627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.477 [2024-10-01 13:52:37.332662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.477 [2024-10-01 13:52:37.342553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.477 [2024-10-01 13:52:37.342668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.477 [2024-10-01 13:52:37.342699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.477 [2024-10-01 13:52:37.342717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.477 [2024-10-01 13:52:37.342753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.477 [2024-10-01 13:52:37.342789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.477 [2024-10-01 13:52:37.342806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.477 [2024-10-01 13:52:37.342821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.477 [2024-10-01 13:52:37.342855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.477 [2024-10-01 13:52:37.353091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.477 [2024-10-01 13:52:37.353207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.477 [2024-10-01 13:52:37.353239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.477 [2024-10-01 13:52:37.353257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.477 [2024-10-01 13:52:37.353294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.477 [2024-10-01 13:52:37.353340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.477 [2024-10-01 13:52:37.353360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.477 [2024-10-01 13:52:37.353374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.477 [2024-10-01 13:52:37.353408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.477 [2024-10-01 13:52:37.363191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.477 [2024-10-01 13:52:37.363306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.477 [2024-10-01 13:52:37.363354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.478 [2024-10-01 13:52:37.363374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.478 [2024-10-01 13:52:37.363412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.478 [2024-10-01 13:52:37.363686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.478 [2024-10-01 13:52:37.363722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.478 [2024-10-01 13:52:37.363741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.478 [2024-10-01 13:52:37.363817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.478 [2024-10-01 13:52:37.375251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.478 [2024-10-01 13:52:37.376098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.478 [2024-10-01 13:52:37.376141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.478 [2024-10-01 13:52:37.376163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.478 [2024-10-01 13:52:37.376265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.478 [2024-10-01 13:52:37.376308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.478 [2024-10-01 13:52:37.376326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.478 [2024-10-01 13:52:37.376341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.478 [2024-10-01 13:52:37.376379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.478 [2024-10-01 13:52:37.385594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.478 [2024-10-01 13:52:37.385734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.478 [2024-10-01 13:52:37.385766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.478 [2024-10-01 13:52:37.385784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.478 [2024-10-01 13:52:37.385821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.478 [2024-10-01 13:52:37.385856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.478 [2024-10-01 13:52:37.385874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.478 [2024-10-01 13:52:37.385888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.478 [2024-10-01 13:52:37.385938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.478 [2024-10-01 13:52:37.395946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.478 [2024-10-01 13:52:37.396072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.478 [2024-10-01 13:52:37.396105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.478 [2024-10-01 13:52:37.396123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.478 [2024-10-01 13:52:37.396160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.478 [2024-10-01 13:52:37.396223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.478 [2024-10-01 13:52:37.396243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.478 [2024-10-01 13:52:37.396257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.478 [2024-10-01 13:52:37.396292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.478 [2024-10-01 13:52:37.406702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.478 [2024-10-01 13:52:37.406819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.478 [2024-10-01 13:52:37.406851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.478 [2024-10-01 13:52:37.406869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.478 [2024-10-01 13:52:37.406907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.478 [2024-10-01 13:52:37.406964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.478 [2024-10-01 13:52:37.406983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.478 [2024-10-01 13:52:37.406998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.478 [2024-10-01 13:52:37.407032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.478 [2024-10-01 13:52:37.416802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.478 [2024-10-01 13:52:37.416934] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.478 [2024-10-01 13:52:37.416966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.478 [2024-10-01 13:52:37.416985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.478 [2024-10-01 13:52:37.417023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.478 [2024-10-01 13:52:37.417058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.478 [2024-10-01 13:52:37.417075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.478 [2024-10-01 13:52:37.417089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.478 [2024-10-01 13:52:37.417125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.478 [2024-10-01 13:52:37.429147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.478 [2024-10-01 13:52:37.429991] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.478 [2024-10-01 13:52:37.430035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.478 [2024-10-01 13:52:37.430056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.478 [2024-10-01 13:52:37.430158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.478 [2024-10-01 13:52:37.430203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.478 [2024-10-01 13:52:37.430221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.478 [2024-10-01 13:52:37.430236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.478 [2024-10-01 13:52:37.430292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.478 [2024-10-01 13:52:37.439570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.478 [2024-10-01 13:52:37.439731] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.478 [2024-10-01 13:52:37.439766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.478 [2024-10-01 13:52:37.439789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.478 [2024-10-01 13:52:37.439829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.478 [2024-10-01 13:52:37.439865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.478 [2024-10-01 13:52:37.439883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.478 [2024-10-01 13:52:37.439899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.478 [2024-10-01 13:52:37.439952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.478 [2024-10-01 13:52:37.449933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.478 [2024-10-01 13:52:37.450070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.478 [2024-10-01 13:52:37.450103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.478 [2024-10-01 13:52:37.450121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.478 [2024-10-01 13:52:37.450160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.478 [2024-10-01 13:52:37.450196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.478 [2024-10-01 13:52:37.450215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.478 [2024-10-01 13:52:37.450230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.478 [2024-10-01 13:52:37.450266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.478 [2024-10-01 13:52:37.461008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.478 [2024-10-01 13:52:37.461124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.478 [2024-10-01 13:52:37.461156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.478 [2024-10-01 13:52:37.461174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.478 [2024-10-01 13:52:37.461212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.478 [2024-10-01 13:52:37.461247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.478 [2024-10-01 13:52:37.461265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.478 [2024-10-01 13:52:37.461279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.478 [2024-10-01 13:52:37.461316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.478 [2024-10-01 13:52:37.472385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.478 [2024-10-01 13:52:37.472507] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.478 [2024-10-01 13:52:37.472539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.478 [2024-10-01 13:52:37.472589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.478 [2024-10-01 13:52:37.472629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.478 [2024-10-01 13:52:37.472665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.479 [2024-10-01 13:52:37.472683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.479 [2024-10-01 13:52:37.472697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.479 [2024-10-01 13:52:37.472732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.479 [2024-10-01 13:52:37.482850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.479 [2024-10-01 13:52:37.483016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.479 [2024-10-01 13:52:37.483048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.479 [2024-10-01 13:52:37.483067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.479 [2024-10-01 13:52:37.483106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.479 [2024-10-01 13:52:37.483141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.479 [2024-10-01 13:52:37.483159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.479 [2024-10-01 13:52:37.483174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.479 [2024-10-01 13:52:37.483209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.479 [2024-10-01 13:52:37.493964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.479 [2024-10-01 13:52:37.494165] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.479 [2024-10-01 13:52:37.494207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.479 [2024-10-01 13:52:37.494228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.479 [2024-10-01 13:52:37.494267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.479 [2024-10-01 13:52:37.494304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.479 [2024-10-01 13:52:37.494322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.479 [2024-10-01 13:52:37.494348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.479 [2024-10-01 13:52:37.494384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.479 [2024-10-01 13:52:37.504252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.479 [2024-10-01 13:52:37.504391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.479 [2024-10-01 13:52:37.504424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.479 [2024-10-01 13:52:37.504443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.479 [2024-10-01 13:52:37.504482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.479 [2024-10-01 13:52:37.504518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.479 [2024-10-01 13:52:37.504567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.479 [2024-10-01 13:52:37.504583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.479 [2024-10-01 13:52:37.504619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.479 [2024-10-01 13:52:37.514372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.479 [2024-10-01 13:52:37.514514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.479 [2024-10-01 13:52:37.514561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.479 [2024-10-01 13:52:37.514583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.479 [2024-10-01 13:52:37.514622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.479 [2024-10-01 13:52:37.514677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.479 [2024-10-01 13:52:37.514700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.479 [2024-10-01 13:52:37.514715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.479 [2024-10-01 13:52:37.515978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.479 [2024-10-01 13:52:37.524493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.479 [2024-10-01 13:52:37.524643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.479 [2024-10-01 13:52:37.524679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.479 [2024-10-01 13:52:37.524699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.479 [2024-10-01 13:52:37.524738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.479 [2024-10-01 13:52:37.524775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.479 [2024-10-01 13:52:37.524792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.479 [2024-10-01 13:52:37.524808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.479 [2024-10-01 13:52:37.524844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.479 [2024-10-01 13:52:37.534606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.479 [2024-10-01 13:52:37.534768] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.479 [2024-10-01 13:52:37.534802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.479 [2024-10-01 13:52:37.534822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.479 [2024-10-01 13:52:37.534860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.479 [2024-10-01 13:52:37.534896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.479 [2024-10-01 13:52:37.534930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.479 [2024-10-01 13:52:37.534949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.479 [2024-10-01 13:52:37.534987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.479 [2024-10-01 13:52:37.544734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.479 [2024-10-01 13:52:37.544886] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.479 [2024-10-01 13:52:37.544935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.479 [2024-10-01 13:52:37.544957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.479 [2024-10-01 13:52:37.544999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.479 [2024-10-01 13:52:37.545035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.479 [2024-10-01 13:52:37.545053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.479 [2024-10-01 13:52:37.545068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.479 [2024-10-01 13:52:37.545103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.479 [2024-10-01 13:52:37.554869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.479 [2024-10-01 13:52:37.555027] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.479 [2024-10-01 13:52:37.555061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.479 [2024-10-01 13:52:37.555080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.479 [2024-10-01 13:52:37.555678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.479 [2024-10-01 13:52:37.555882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.479 [2024-10-01 13:52:37.555926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.479 [2024-10-01 13:52:37.555948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.479 [2024-10-01 13:52:37.556067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.479 [2024-10-01 13:52:37.565049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.479 [2024-10-01 13:52:37.565212] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.479 [2024-10-01 13:52:37.565274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.479 [2024-10-01 13:52:37.565342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.479 [2024-10-01 13:52:37.565618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.479 [2024-10-01 13:52:37.565719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.479 [2024-10-01 13:52:37.565744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.480 [2024-10-01 13:52:37.565762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.480 [2024-10-01 13:52:37.565798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.480 [2024-10-01 13:52:37.575182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.480 [2024-10-01 13:52:37.576577] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.480 [2024-10-01 13:52:37.576625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.480 [2024-10-01 13:52:37.576646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.480 [2024-10-01 13:52:37.576925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.480 [2024-10-01 13:52:37.577875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.480 [2024-10-01 13:52:37.577924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.480 [2024-10-01 13:52:37.577947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.480 [2024-10-01 13:52:37.578697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.480 [2024-10-01 13:52:37.586008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.480 [2024-10-01 13:52:37.586225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.480 [2024-10-01 13:52:37.586268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.480 [2024-10-01 13:52:37.586290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.480 [2024-10-01 13:52:37.586330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.480 [2024-10-01 13:52:37.586366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.480 [2024-10-01 13:52:37.586383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.480 [2024-10-01 13:52:37.586398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.480 [2024-10-01 13:52:37.586433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.480 [2024-10-01 13:52:37.596705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.480 [2024-10-01 13:52:37.596856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.480 [2024-10-01 13:52:37.596898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.480 [2024-10-01 13:52:37.596934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.480 [2024-10-01 13:52:37.596976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.480 [2024-10-01 13:52:37.597033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.480 [2024-10-01 13:52:37.597056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.480 [2024-10-01 13:52:37.597073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.480 [2024-10-01 13:52:37.597108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.480 [2024-10-01 13:52:37.606832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.480 [2024-10-01 13:52:37.606984] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.480 [2024-10-01 13:52:37.607018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.480 [2024-10-01 13:52:37.607037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.480 [2024-10-01 13:52:37.607075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.480 [2024-10-01 13:52:37.607111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.480 [2024-10-01 13:52:37.607129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.480 [2024-10-01 13:52:37.607178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.480 [2024-10-01 13:52:37.607216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.480 [2024-10-01 13:52:37.617084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.480 [2024-10-01 13:52:37.617224] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.480 [2024-10-01 13:52:37.617267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.480 [2024-10-01 13:52:37.617286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.480 [2024-10-01 13:52:37.617325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.480 [2024-10-01 13:52:37.617365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.480 [2024-10-01 13:52:37.617383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.480 [2024-10-01 13:52:37.617398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.480 [2024-10-01 13:52:37.617433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.480 [2024-10-01 13:52:37.628208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.480 [2024-10-01 13:52:37.628344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.480 [2024-10-01 13:52:37.628376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.480 [2024-10-01 13:52:37.628394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.480 [2024-10-01 13:52:37.628434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.480 [2024-10-01 13:52:37.628470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.480 [2024-10-01 13:52:37.628487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.480 [2024-10-01 13:52:37.628502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.480 [2024-10-01 13:52:37.628537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.480 [2024-10-01 13:52:37.638495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.480 [2024-10-01 13:52:37.638637] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.480 [2024-10-01 13:52:37.638671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.480 [2024-10-01 13:52:37.638689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.480 [2024-10-01 13:52:37.638727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.480 [2024-10-01 13:52:37.638764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.480 [2024-10-01 13:52:37.638781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.480 [2024-10-01 13:52:37.638797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.480 [2024-10-01 13:52:37.638832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.480 [2024-10-01 13:52:37.648620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.480 [2024-10-01 13:52:37.648803] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.480 [2024-10-01 13:52:37.648837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.480 [2024-10-01 13:52:37.648862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.480 [2024-10-01 13:52:37.648900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.480 [2024-10-01 13:52:37.648954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.480 [2024-10-01 13:52:37.648973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.480 [2024-10-01 13:52:37.648989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.480 [2024-10-01 13:52:37.650244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.480 [2024-10-01 13:52:37.658777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.480 [2024-10-01 13:52:37.658949] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.480 [2024-10-01 13:52:37.658996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.480 [2024-10-01 13:52:37.659017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.480 [2024-10-01 13:52:37.659057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.480 [2024-10-01 13:52:37.659094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.480 [2024-10-01 13:52:37.659112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.480 [2024-10-01 13:52:37.659128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.480 [2024-10-01 13:52:37.659856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.480 [2024-10-01 13:52:37.670528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.480 [2024-10-01 13:52:37.670670] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.480 [2024-10-01 13:52:37.670703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.480 [2024-10-01 13:52:37.670722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.480 [2024-10-01 13:52:37.670776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.480 [2024-10-01 13:52:37.670817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.480 [2024-10-01 13:52:37.670835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.480 [2024-10-01 13:52:37.670850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.480 [2024-10-01 13:52:37.670885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.480 [2024-10-01 13:52:37.680644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.480 [2024-10-01 13:52:37.680762] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.481 [2024-10-01 13:52:37.680794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.481 [2024-10-01 13:52:37.680812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.481 [2024-10-01 13:52:37.680848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.481 [2024-10-01 13:52:37.680927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.481 [2024-10-01 13:52:37.680950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.481 [2024-10-01 13:52:37.680965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.481 [2024-10-01 13:52:37.681002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.481 [2024-10-01 13:52:37.690826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.481 [2024-10-01 13:52:37.690985] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.481 [2024-10-01 13:52:37.691019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.481 [2024-10-01 13:52:37.691038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.481 [2024-10-01 13:52:37.691084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.481 [2024-10-01 13:52:37.691120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.481 [2024-10-01 13:52:37.691138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.481 [2024-10-01 13:52:37.691153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.481 [2024-10-01 13:52:37.691188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.481 [2024-10-01 13:52:37.701843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.481 [2024-10-01 13:52:37.702004] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.481 [2024-10-01 13:52:37.702053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.481 [2024-10-01 13:52:37.702072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.481 [2024-10-01 13:52:37.702110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.481 [2024-10-01 13:52:37.702146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.481 [2024-10-01 13:52:37.702164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.481 [2024-10-01 13:52:37.702180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.481 [2024-10-01 13:52:37.702215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.481 [2024-10-01 13:52:37.712104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.481 [2024-10-01 13:52:37.712227] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.481 [2024-10-01 13:52:37.712260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.481 [2024-10-01 13:52:37.712279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.481 [2024-10-01 13:52:37.712316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.481 [2024-10-01 13:52:37.712353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.481 [2024-10-01 13:52:37.712371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.481 [2024-10-01 13:52:37.712386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.481 [2024-10-01 13:52:37.712454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.481 [2024-10-01 13:52:37.722204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.481 [2024-10-01 13:52:37.722322] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.481 [2024-10-01 13:52:37.722354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.481 [2024-10-01 13:52:37.722372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.481 [2024-10-01 13:52:37.723607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.481 [2024-10-01 13:52:37.723844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.481 [2024-10-01 13:52:37.723880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.481 [2024-10-01 13:52:37.723897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.481 [2024-10-01 13:52:37.724799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.481 [2024-10-01 13:52:37.732304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.481 [2024-10-01 13:52:37.733112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.481 [2024-10-01 13:52:37.733156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.481 [2024-10-01 13:52:37.733177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.481 [2024-10-01 13:52:37.733286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.481 [2024-10-01 13:52:37.733330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.481 [2024-10-01 13:52:37.733348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.481 [2024-10-01 13:52:37.733363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.481 [2024-10-01 13:52:37.733399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.481 [2024-10-01 13:52:37.743816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.481 [2024-10-01 13:52:37.744023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.481 [2024-10-01 13:52:37.744066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.481 [2024-10-01 13:52:37.744088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.481 [2024-10-01 13:52:37.744137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.481 [2024-10-01 13:52:37.744176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.481 [2024-10-01 13:52:37.744194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.481 [2024-10-01 13:52:37.744210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.481 [2024-10-01 13:52:37.744247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.481 [2024-10-01 13:52:37.753979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.481 [2024-10-01 13:52:37.754123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.481 [2024-10-01 13:52:37.754157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.481 [2024-10-01 13:52:37.754220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.481 [2024-10-01 13:52:37.754264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.481 [2024-10-01 13:52:37.754301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.481 [2024-10-01 13:52:37.754318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.481 [2024-10-01 13:52:37.754333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.481 [2024-10-01 13:52:37.754368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.481 [2024-10-01 13:52:37.764331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.481 [2024-10-01 13:52:37.764583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.481 [2024-10-01 13:52:37.764627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.481 [2024-10-01 13:52:37.764648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.481 [2024-10-01 13:52:37.764765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.481 [2024-10-01 13:52:37.764822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.481 [2024-10-01 13:52:37.764841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.481 [2024-10-01 13:52:37.764857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.481 [2024-10-01 13:52:37.764893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.481 [2024-10-01 13:52:37.775629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.481 [2024-10-01 13:52:37.775749] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.481 [2024-10-01 13:52:37.775781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.481 [2024-10-01 13:52:37.775799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.481 [2024-10-01 13:52:37.775847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.481 [2024-10-01 13:52:37.775883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.481 [2024-10-01 13:52:37.775901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.481 [2024-10-01 13:52:37.775932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.481 [2024-10-01 13:52:37.775970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.481 [2024-10-01 13:52:37.786092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.481 [2024-10-01 13:52:37.786241] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.481 [2024-10-01 13:52:37.786282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.481 [2024-10-01 13:52:37.786302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.482 [2024-10-01 13:52:37.786339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.482 [2024-10-01 13:52:37.786375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.482 [2024-10-01 13:52:37.786427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.482 [2024-10-01 13:52:37.786444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.482 [2024-10-01 13:52:37.786481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.482 [2024-10-01 13:52:37.796215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.482 [2024-10-01 13:52:37.796337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.482 [2024-10-01 13:52:37.796369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.482 [2024-10-01 13:52:37.796387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.482 [2024-10-01 13:52:37.796425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.482 [2024-10-01 13:52:37.796461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.482 [2024-10-01 13:52:37.796479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.482 [2024-10-01 13:52:37.796493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.482 [2024-10-01 13:52:37.796528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.482 [2024-10-01 13:52:37.806318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.482 [2024-10-01 13:52:37.806434] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.482 [2024-10-01 13:52:37.806466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.482 [2024-10-01 13:52:37.806484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.482 [2024-10-01 13:52:37.806523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.482 [2024-10-01 13:52:37.806577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.482 [2024-10-01 13:52:37.806596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.482 [2024-10-01 13:52:37.806611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.482 [2024-10-01 13:52:37.806646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.482 [2024-10-01 13:52:37.817789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.482 [2024-10-01 13:52:37.818640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.482 [2024-10-01 13:52:37.818683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.482 [2024-10-01 13:52:37.818704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.482 [2024-10-01 13:52:37.818817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.482 [2024-10-01 13:52:37.818862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.482 [2024-10-01 13:52:37.818881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.482 [2024-10-01 13:52:37.818897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.482 [2024-10-01 13:52:37.818955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.482 [2024-10-01 13:52:37.827894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.482 [2024-10-01 13:52:37.828565] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.482 [2024-10-01 13:52:37.828608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.482 [2024-10-01 13:52:37.828628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.482 [2024-10-01 13:52:37.828793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.482 [2024-10-01 13:52:37.828953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.482 [2024-10-01 13:52:37.828985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.482 [2024-10-01 13:52:37.829003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.482 [2024-10-01 13:52:37.829048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.482 [2024-10-01 13:52:37.838005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.482 [2024-10-01 13:52:37.838163] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.482 [2024-10-01 13:52:37.838196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.482 [2024-10-01 13:52:37.838214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.482 [2024-10-01 13:52:37.838252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.482 [2024-10-01 13:52:37.838288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.482 [2024-10-01 13:52:37.838305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.482 [2024-10-01 13:52:37.838319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.482 [2024-10-01 13:52:37.838354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.482 [2024-10-01 13:52:37.848140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.482 [2024-10-01 13:52:37.848256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.482 [2024-10-01 13:52:37.848288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.482 [2024-10-01 13:52:37.848306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.482 [2024-10-01 13:52:37.848344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.482 [2024-10-01 13:52:37.848380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.482 [2024-10-01 13:52:37.848397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.482 [2024-10-01 13:52:37.848412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.482 [2024-10-01 13:52:37.848446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.482 [2024-10-01 13:52:37.858245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.482 [2024-10-01 13:52:37.858363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.482 [2024-10-01 13:52:37.858396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.482 [2024-10-01 13:52:37.858414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.482 [2024-10-01 13:52:37.858476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.482 [2024-10-01 13:52:37.858513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.482 [2024-10-01 13:52:37.858531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.482 [2024-10-01 13:52:37.858560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.482 [2024-10-01 13:52:37.858596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.482 7992.89 IOPS, 31.22 MiB/s [2024-10-01 13:52:37.870975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.482 [2024-10-01 13:52:37.871175] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.482 [2024-10-01 13:52:37.871208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.482 [2024-10-01 13:52:37.871226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.482 [2024-10-01 13:52:37.872453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.482 [2024-10-01 13:52:37.873549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.482 [2024-10-01 13:52:37.873588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.482 [2024-10-01 13:52:37.873610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.482 [2024-10-01 13:52:37.874348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
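[editor note] The throughput sample interleaved above ("7992.89 IOPS, 31.22 MiB/s") is consistent with a 4 KiB I/O size; the block size is not printed in this excerpt, so that is an assumption. A quick hedged arithmetic check, in Python:

    # Hedged sanity check of the interleaved throughput sample.
    # Assumption: the workload issues 4 KiB I/Os (not stated in this log excerpt).
    iops = 7992.89
    io_size_bytes = 4 * 1024                      # assumed 4 KiB per I/O
    mib_per_s = iops * io_size_bytes / (1024 * 1024)
    print(f"{mib_per_s:.2f} MiB/s")               # prints 31.22, matching the logged value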
00:18:34.482 [2024-10-01 13:52:37.881589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.482 [2024-10-01 13:52:37.881716] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.482 [2024-10-01 13:52:37.881749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.482 [2024-10-01 13:52:37.881767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.482 [2024-10-01 13:52:37.881804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.482 [2024-10-01 13:52:37.881840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.482 [2024-10-01 13:52:37.881858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.482 [2024-10-01 13:52:37.881872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.482 [2024-10-01 13:52:37.881907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.482 [2024-10-01 13:52:37.892639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.482 [2024-10-01 13:52:37.892993] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.482 [2024-10-01 13:52:37.893028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.482 [2024-10-01 13:52:37.893046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.482 [2024-10-01 13:52:37.893121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.482 [2024-10-01 13:52:37.893163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.482 [2024-10-01 13:52:37.893189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.483 [2024-10-01 13:52:37.893228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.483 [2024-10-01 13:52:37.893266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.483 [2024-10-01 13:52:37.903812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.483 [2024-10-01 13:52:37.903957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.483 [2024-10-01 13:52:37.903990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.483 [2024-10-01 13:52:37.904008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.483 [2024-10-01 13:52:37.904046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.483 [2024-10-01 13:52:37.904082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.483 [2024-10-01 13:52:37.904099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.483 [2024-10-01 13:52:37.904113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.483 [2024-10-01 13:52:37.904148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.483 [2024-10-01 13:52:37.914192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.483 [2024-10-01 13:52:37.914316] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.483 [2024-10-01 13:52:37.914348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.483 [2024-10-01 13:52:37.914366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.483 [2024-10-01 13:52:37.914977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.483 [2024-10-01 13:52:37.915188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.483 [2024-10-01 13:52:37.915223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.483 [2024-10-01 13:52:37.915241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.483 [2024-10-01 13:52:37.915353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.483 [2024-10-01 13:52:37.924707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.483 [2024-10-01 13:52:37.924823] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.483 [2024-10-01 13:52:37.924855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.483 [2024-10-01 13:52:37.924874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.483 [2024-10-01 13:52:37.924926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.483 [2024-10-01 13:52:37.924971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.483 [2024-10-01 13:52:37.924988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.483 [2024-10-01 13:52:37.925003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.483 [2024-10-01 13:52:37.925048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.483 [2024-10-01 13:52:37.934885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.483 [2024-10-01 13:52:37.935039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.483 [2024-10-01 13:52:37.935084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.483 [2024-10-01 13:52:37.935103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.483 [2024-10-01 13:52:37.935150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.483 [2024-10-01 13:52:37.935189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.483 [2024-10-01 13:52:37.935206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.483 [2024-10-01 13:52:37.935220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.483 [2024-10-01 13:52:37.935255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.483 [2024-10-01 13:52:37.945035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.483 [2024-10-01 13:52:37.945159] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.483 [2024-10-01 13:52:37.945191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.483 [2024-10-01 13:52:37.945209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.483 [2024-10-01 13:52:37.945246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.483 [2024-10-01 13:52:37.945282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.483 [2024-10-01 13:52:37.945300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.483 [2024-10-01 13:52:37.945314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.483 [2024-10-01 13:52:37.945349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.483 [2024-10-01 13:52:37.955134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.483 [2024-10-01 13:52:37.955253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.483 [2024-10-01 13:52:37.955285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.483 [2024-10-01 13:52:37.955304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.483 [2024-10-01 13:52:37.955342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.483 [2024-10-01 13:52:37.955378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.483 [2024-10-01 13:52:37.955396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.483 [2024-10-01 13:52:37.955412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.483 [2024-10-01 13:52:37.955447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.483 [2024-10-01 13:52:37.965236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.483 [2024-10-01 13:52:37.966563] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.483 [2024-10-01 13:52:37.966607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.483 [2024-10-01 13:52:37.966629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.483 [2024-10-01 13:52:37.967412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.483 [2024-10-01 13:52:37.967754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.483 [2024-10-01 13:52:37.967791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.483 [2024-10-01 13:52:37.967809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.483 [2024-10-01 13:52:37.967884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.483 [2024-10-01 13:52:37.976504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.483 [2024-10-01 13:52:37.976818] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.483 [2024-10-01 13:52:37.976861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.483 [2024-10-01 13:52:37.976881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.483 [2024-10-01 13:52:37.977789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.483 [2024-10-01 13:52:37.978551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.483 [2024-10-01 13:52:37.978589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.483 [2024-10-01 13:52:37.978612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.483 [2024-10-01 13:52:37.978714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.483 [2024-10-01 13:52:37.986612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.483 [2024-10-01 13:52:37.986733] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.483 [2024-10-01 13:52:37.986764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.483 [2024-10-01 13:52:37.986783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.483 [2024-10-01 13:52:37.986830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.483 [2024-10-01 13:52:37.986865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.483 [2024-10-01 13:52:37.986884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.483 [2024-10-01 13:52:37.986899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.483 [2024-10-01 13:52:37.986949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.483 [2024-10-01 13:52:37.997135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.483 [2024-10-01 13:52:37.997510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.483 [2024-10-01 13:52:37.997555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.483 [2024-10-01 13:52:37.997576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.483 [2024-10-01 13:52:37.997653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.483 [2024-10-01 13:52:37.997707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.483 [2024-10-01 13:52:37.997727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.483 [2024-10-01 13:52:37.997743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.483 [2024-10-01 13:52:37.997811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.484 [2024-10-01 13:52:38.008390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.484 [2024-10-01 13:52:38.008526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.484 [2024-10-01 13:52:38.008559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.484 [2024-10-01 13:52:38.008578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.484 [2024-10-01 13:52:38.008617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.484 [2024-10-01 13:52:38.008652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.484 [2024-10-01 13:52:38.008670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.484 [2024-10-01 13:52:38.008685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.484 [2024-10-01 13:52:38.008720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.484 [2024-10-01 13:52:38.018886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.484 [2024-10-01 13:52:38.019016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.484 [2024-10-01 13:52:38.019049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.484 [2024-10-01 13:52:38.019068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.484 [2024-10-01 13:52:38.019651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.484 [2024-10-01 13:52:38.019839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.484 [2024-10-01 13:52:38.019874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.484 [2024-10-01 13:52:38.019892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.484 [2024-10-01 13:52:38.020030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.484 [2024-10-01 13:52:38.029643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.484 [2024-10-01 13:52:38.029762] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.484 [2024-10-01 13:52:38.029795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.484 [2024-10-01 13:52:38.029814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.484 [2024-10-01 13:52:38.029851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.484 [2024-10-01 13:52:38.029888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.484 [2024-10-01 13:52:38.029905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.484 [2024-10-01 13:52:38.029937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.484 [2024-10-01 13:52:38.029975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.484 [2024-10-01 13:52:38.039742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.484 [2024-10-01 13:52:38.040098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.484 [2024-10-01 13:52:38.040169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.484 [2024-10-01 13:52:38.040192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.484 [2024-10-01 13:52:38.040339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.484 [2024-10-01 13:52:38.040471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.484 [2024-10-01 13:52:38.040499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.484 [2024-10-01 13:52:38.040516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.484 [2024-10-01 13:52:38.040561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.484 [2024-10-01 13:52:38.050607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.484 [2024-10-01 13:52:38.050731] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.484 [2024-10-01 13:52:38.050764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.484 [2024-10-01 13:52:38.050782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.484 [2024-10-01 13:52:38.050820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.484 [2024-10-01 13:52:38.050855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.484 [2024-10-01 13:52:38.050873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.484 [2024-10-01 13:52:38.050888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.484 [2024-10-01 13:52:38.050939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.484 [2024-10-01 13:52:38.060711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.484 [2024-10-01 13:52:38.060826] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.484 [2024-10-01 13:52:38.060857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.484 [2024-10-01 13:52:38.060876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.484 [2024-10-01 13:52:38.060929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.484 [2024-10-01 13:52:38.060970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.484 [2024-10-01 13:52:38.060988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.484 [2024-10-01 13:52:38.061003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.484 [2024-10-01 13:52:38.061038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.484 [2024-10-01 13:52:38.070894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.484 [2024-10-01 13:52:38.071042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.484 [2024-10-01 13:52:38.071074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.484 [2024-10-01 13:52:38.071092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.484 [2024-10-01 13:52:38.071129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.484 [2024-10-01 13:52:38.071196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.484 [2024-10-01 13:52:38.071217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.484 [2024-10-01 13:52:38.071232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.484 [2024-10-01 13:52:38.071268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.484 [2024-10-01 13:52:38.081194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.484 [2024-10-01 13:52:38.081319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.484 [2024-10-01 13:52:38.081352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.484 [2024-10-01 13:52:38.081370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.484 [2024-10-01 13:52:38.081413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.484 [2024-10-01 13:52:38.081448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.484 [2024-10-01 13:52:38.081466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.484 [2024-10-01 13:52:38.081480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.484 [2024-10-01 13:52:38.081514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.484 [2024-10-01 13:52:38.091301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.484 [2024-10-01 13:52:38.091418] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.484 [2024-10-01 13:52:38.091450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.484 [2024-10-01 13:52:38.091469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.484 [2024-10-01 13:52:38.091507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.484 [2024-10-01 13:52:38.091544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.484 [2024-10-01 13:52:38.091562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.484 [2024-10-01 13:52:38.091577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.484 [2024-10-01 13:52:38.091611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.484 [2024-10-01 13:52:38.101408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.484 [2024-10-01 13:52:38.101525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.484 [2024-10-01 13:52:38.101557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.485 [2024-10-01 13:52:38.101576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.485 [2024-10-01 13:52:38.101612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.485 [2024-10-01 13:52:38.101647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.485 [2024-10-01 13:52:38.101665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.485 [2024-10-01 13:52:38.101681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.485 [2024-10-01 13:52:38.101716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.485 [2024-10-01 13:52:38.111514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.485 [2024-10-01 13:52:38.111632] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.485 [2024-10-01 13:52:38.111665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.485 [2024-10-01 13:52:38.111683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.485 [2024-10-01 13:52:38.111720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.485 [2024-10-01 13:52:38.111756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.485 [2024-10-01 13:52:38.111773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.485 [2024-10-01 13:52:38.111788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.485 [2024-10-01 13:52:38.111823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.485 [2024-10-01 13:52:38.121611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.485 [2024-10-01 13:52:38.121727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.485 [2024-10-01 13:52:38.121760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.485 [2024-10-01 13:52:38.121778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.485 [2024-10-01 13:52:38.121815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.485 [2024-10-01 13:52:38.121851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.485 [2024-10-01 13:52:38.121880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.485 [2024-10-01 13:52:38.121895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.485 [2024-10-01 13:52:38.121948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.485 [2024-10-01 13:52:38.131720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.485 [2024-10-01 13:52:38.131855] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.485 [2024-10-01 13:52:38.131887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.485 [2024-10-01 13:52:38.131906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.485 [2024-10-01 13:52:38.131961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.485 [2024-10-01 13:52:38.131999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.485 [2024-10-01 13:52:38.132018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.485 [2024-10-01 13:52:38.132033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.485 [2024-10-01 13:52:38.132068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.485 [2024-10-01 13:52:38.141847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.485 [2024-10-01 13:52:38.142021] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.485 [2024-10-01 13:52:38.142055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.485 [2024-10-01 13:52:38.142105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.485 [2024-10-01 13:52:38.142147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.485 [2024-10-01 13:52:38.142183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.485 [2024-10-01 13:52:38.142202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.485 [2024-10-01 13:52:38.142218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.485 [2024-10-01 13:52:38.142253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.485 [2024-10-01 13:52:38.151988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.485 [2024-10-01 13:52:38.152117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.485 [2024-10-01 13:52:38.152150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.485 [2024-10-01 13:52:38.152168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.485 [2024-10-01 13:52:38.152206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.485 [2024-10-01 13:52:38.152243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.485 [2024-10-01 13:52:38.152260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.485 [2024-10-01 13:52:38.152275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.485 [2024-10-01 13:52:38.152310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.485 [2024-10-01 13:52:38.162094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.485 [2024-10-01 13:52:38.162225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.485 [2024-10-01 13:52:38.162258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.485 [2024-10-01 13:52:38.162277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.485 [2024-10-01 13:52:38.162316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.485 [2024-10-01 13:52:38.162352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.485 [2024-10-01 13:52:38.162370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.485 [2024-10-01 13:52:38.162386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.485 [2024-10-01 13:52:38.162421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.485 [2024-10-01 13:52:38.172294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.485 [2024-10-01 13:52:38.172444] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.485 [2024-10-01 13:52:38.172478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.485 [2024-10-01 13:52:38.172497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.485 [2024-10-01 13:52:38.172536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.485 [2024-10-01 13:52:38.172572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.485 [2024-10-01 13:52:38.172590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.485 [2024-10-01 13:52:38.172640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.485 [2024-10-01 13:52:38.172679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.485 [2024-10-01 13:52:38.182421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.485 [2024-10-01 13:52:38.182602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.485 [2024-10-01 13:52:38.182640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.485 [2024-10-01 13:52:38.182660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.485 [2024-10-01 13:52:38.182700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.485 [2024-10-01 13:52:38.182737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.485 [2024-10-01 13:52:38.182754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.485 [2024-10-01 13:52:38.182770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.485 [2024-10-01 13:52:38.182806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.485 [2024-10-01 13:52:38.192719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.485 [2024-10-01 13:52:38.192885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.485 [2024-10-01 13:52:38.192934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.485 [2024-10-01 13:52:38.192957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.485 [2024-10-01 13:52:38.192997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.485 [2024-10-01 13:52:38.193034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.485 [2024-10-01 13:52:38.193051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.485 [2024-10-01 13:52:38.193067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.485 [2024-10-01 13:52:38.193103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.485 [2024-10-01 13:52:38.203029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.485 [2024-10-01 13:52:38.203188] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.485 [2024-10-01 13:52:38.203222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.485 [2024-10-01 13:52:38.203242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.485 [2024-10-01 13:52:38.203281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.486 [2024-10-01 13:52:38.203318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.486 [2024-10-01 13:52:38.203336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.486 [2024-10-01 13:52:38.203353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.486 [2024-10-01 13:52:38.203397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.486 [2024-10-01 13:52:38.213154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.486 [2024-10-01 13:52:38.213338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.486 [2024-10-01 13:52:38.213372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.486 [2024-10-01 13:52:38.213391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.486 [2024-10-01 13:52:38.213430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.486 [2024-10-01 13:52:38.213468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.486 [2024-10-01 13:52:38.213486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.486 [2024-10-01 13:52:38.213501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.486 [2024-10-01 13:52:38.213536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.486 [2024-10-01 13:52:38.223308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.486 [2024-10-01 13:52:38.223443] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.486 [2024-10-01 13:52:38.223477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.486 [2024-10-01 13:52:38.223497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.486 [2024-10-01 13:52:38.223538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.486 [2024-10-01 13:52:38.223574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.486 [2024-10-01 13:52:38.223593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.486 [2024-10-01 13:52:38.223609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.486 [2024-10-01 13:52:38.223645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.486 [2024-10-01 13:52:38.233428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.486 [2024-10-01 13:52:38.233590] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.486 [2024-10-01 13:52:38.233624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.486 [2024-10-01 13:52:38.233643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.486 [2024-10-01 13:52:38.233683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.486 [2024-10-01 13:52:38.233743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.486 [2024-10-01 13:52:38.233765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.486 [2024-10-01 13:52:38.233781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.486 [2024-10-01 13:52:38.233817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.486 [2024-10-01 13:52:38.243560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.486 [2024-10-01 13:52:38.243720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.486 [2024-10-01 13:52:38.243753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.486 [2024-10-01 13:52:38.243772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.486 [2024-10-01 13:52:38.243838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.486 [2024-10-01 13:52:38.243875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.486 [2024-10-01 13:52:38.243894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.486 [2024-10-01 13:52:38.243909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.486 [2024-10-01 13:52:38.243963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.486 [2024-10-01 13:52:38.253682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.486 [2024-10-01 13:52:38.253853] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.486 [2024-10-01 13:52:38.253888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.486 [2024-10-01 13:52:38.253907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.486 [2024-10-01 13:52:38.253967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.486 [2024-10-01 13:52:38.254007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.486 [2024-10-01 13:52:38.254025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.486 [2024-10-01 13:52:38.254041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.486 [2024-10-01 13:52:38.254077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.486 [2024-10-01 13:52:38.263929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.486 [2024-10-01 13:52:38.264132] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.486 [2024-10-01 13:52:38.264168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.486 [2024-10-01 13:52:38.264191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.486 [2024-10-01 13:52:38.264230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.486 [2024-10-01 13:52:38.264267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.486 [2024-10-01 13:52:38.264285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.486 [2024-10-01 13:52:38.264301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.486 [2024-10-01 13:52:38.264337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.486 [2024-10-01 13:52:38.274057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.486 [2024-10-01 13:52:38.274202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.486 [2024-10-01 13:52:38.274236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.486 [2024-10-01 13:52:38.274255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.486 [2024-10-01 13:52:38.274293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.486 [2024-10-01 13:52:38.274330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.486 [2024-10-01 13:52:38.274348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.486 [2024-10-01 13:52:38.274389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.486 [2024-10-01 13:52:38.274428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.486 [2024-10-01 13:52:38.284229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.486 [2024-10-01 13:52:38.284352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.486 [2024-10-01 13:52:38.284384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.486 [2024-10-01 13:52:38.284403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.486 [2024-10-01 13:52:38.284440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.486 [2024-10-01 13:52:38.284485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.486 [2024-10-01 13:52:38.284503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.486 [2024-10-01 13:52:38.284518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.486 [2024-10-01 13:52:38.284553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.486 [2024-10-01 13:52:38.294579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.486 [2024-10-01 13:52:38.294696] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.486 [2024-10-01 13:52:38.294728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.486 [2024-10-01 13:52:38.294746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.486 [2024-10-01 13:52:38.294783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.486 [2024-10-01 13:52:38.294819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.486 [2024-10-01 13:52:38.294836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.486 [2024-10-01 13:52:38.294851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.486 [2024-10-01 13:52:38.294887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.486 [2024-10-01 13:52:38.304683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.486 [2024-10-01 13:52:38.304801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.486 [2024-10-01 13:52:38.304833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.486 [2024-10-01 13:52:38.304851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.486 [2024-10-01 13:52:38.304888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.486 [2024-10-01 13:52:38.304941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.486 [2024-10-01 13:52:38.304962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.486 [2024-10-01 13:52:38.304977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.486 [2024-10-01 13:52:38.305551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.487 [2024-10-01 13:52:38.316627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.487 [2024-10-01 13:52:38.316994] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.487 [2024-10-01 13:52:38.317060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.487 [2024-10-01 13:52:38.317083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.487 [2024-10-01 13:52:38.317158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.487 [2024-10-01 13:52:38.317201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.487 [2024-10-01 13:52:38.317221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.487 [2024-10-01 13:52:38.317236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.487 [2024-10-01 13:52:38.317272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.487 [2024-10-01 13:52:38.327993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.487 [2024-10-01 13:52:38.328114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.487 [2024-10-01 13:52:38.328146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.487 [2024-10-01 13:52:38.328165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.487 [2024-10-01 13:52:38.328202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.487 [2024-10-01 13:52:38.328237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.487 [2024-10-01 13:52:38.328255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.487 [2024-10-01 13:52:38.328269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.487 [2024-10-01 13:52:38.328304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.487 [2024-10-01 13:52:38.338626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.487 [2024-10-01 13:52:38.338753] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.487 [2024-10-01 13:52:38.338785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.487 [2024-10-01 13:52:38.338804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.487 [2024-10-01 13:52:38.339400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.487 [2024-10-01 13:52:38.339609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.487 [2024-10-01 13:52:38.339645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.487 [2024-10-01 13:52:38.339663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.487 [2024-10-01 13:52:38.339776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.487 [2024-10-01 13:52:38.348737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.487 [2024-10-01 13:52:38.349568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.487 [2024-10-01 13:52:38.349613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.487 [2024-10-01 13:52:38.349633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.487 [2024-10-01 13:52:38.349763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.487 [2024-10-01 13:52:38.349840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.487 [2024-10-01 13:52:38.349868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.487 [2024-10-01 13:52:38.349884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.487 [2024-10-01 13:52:38.349937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.487 [2024-10-01 13:52:38.358970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.487 [2024-10-01 13:52:38.359091] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.487 [2024-10-01 13:52:38.359123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.487 [2024-10-01 13:52:38.359142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.487 [2024-10-01 13:52:38.359179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.487 [2024-10-01 13:52:38.359215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.487 [2024-10-01 13:52:38.359232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.487 [2024-10-01 13:52:38.359247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.487 [2024-10-01 13:52:38.359281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.487 [2024-10-01 13:52:38.369760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.487 [2024-10-01 13:52:38.369877] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.487 [2024-10-01 13:52:38.369909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.487 [2024-10-01 13:52:38.369947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.487 [2024-10-01 13:52:38.370522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.487 [2024-10-01 13:52:38.370724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.487 [2024-10-01 13:52:38.370759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.487 [2024-10-01 13:52:38.370777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.487 [2024-10-01 13:52:38.370889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.487 [2024-10-01 13:52:38.380582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.487 [2024-10-01 13:52:38.380700] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.487 [2024-10-01 13:52:38.380732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.487 [2024-10-01 13:52:38.380751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.487 [2024-10-01 13:52:38.380788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.487 [2024-10-01 13:52:38.380823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.487 [2024-10-01 13:52:38.380841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.487 [2024-10-01 13:52:38.380855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.487 [2024-10-01 13:52:38.380890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.487 [2024-10-01 13:52:38.391016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.487 [2024-10-01 13:52:38.391153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.487 [2024-10-01 13:52:38.391185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.487 [2024-10-01 13:52:38.391204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.487 [2024-10-01 13:52:38.391242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.487 [2024-10-01 13:52:38.391278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.487 [2024-10-01 13:52:38.391297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.487 [2024-10-01 13:52:38.391312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.487 [2024-10-01 13:52:38.391347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.487 [2024-10-01 13:52:38.401439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.487 [2024-10-01 13:52:38.401619] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.487 [2024-10-01 13:52:38.401661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.487 [2024-10-01 13:52:38.401680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.487 [2024-10-01 13:52:38.401718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.487 [2024-10-01 13:52:38.401755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.487 [2024-10-01 13:52:38.401773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.487 [2024-10-01 13:52:38.401789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.487 [2024-10-01 13:52:38.401825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.487 [2024-10-01 13:52:38.409174] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9b1a00 was disconnected and freed. reset controller. 00:18:34.487 [2024-10-01 13:52:38.409317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.487 [2024-10-01 13:52:38.409398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.487 [2024-10-01 13:52:38.411669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.487 [2024-10-01 13:52:38.411785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.487 [2024-10-01 13:52:38.411818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.487 [2024-10-01 13:52:38.411837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.487 [2024-10-01 13:52:38.411874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.487 [2024-10-01 13:52:38.411909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.487 [2024-10-01 13:52:38.411946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.487 [2024-10-01 13:52:38.411961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.487 [2024-10-01 13:52:38.411998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.488 [2024-10-01 13:52:38.412149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.488 [2024-10-01 13:52:38.412185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.488 [2024-10-01 13:52:38.412203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.488 [2024-10-01 13:52:38.412217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.488 [2024-10-01 13:52:38.412246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.488 [2024-10-01 13:52:38.420829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.488 [2024-10-01 13:52:38.420965] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.488 [2024-10-01 13:52:38.420998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.488 [2024-10-01 13:52:38.421017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.488 [2024-10-01 13:52:38.421050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.488 [2024-10-01 13:52:38.421081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.488 [2024-10-01 13:52:38.421100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.488 [2024-10-01 13:52:38.421115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.488 [2024-10-01 13:52:38.421147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.488 [2024-10-01 13:52:38.422042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.488 [2024-10-01 13:52:38.422159] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.488 [2024-10-01 13:52:38.422198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.488 [2024-10-01 13:52:38.422218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.488 [2024-10-01 13:52:38.422251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.488 [2024-10-01 13:52:38.422282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.488 [2024-10-01 13:52:38.422299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.488 [2024-10-01 13:52:38.422313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.488 [2024-10-01 13:52:38.422344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
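Editor's note: from this point in the log, after qpair 0x9b1a00 is disconnected and freed, the host alternates its reconnect attempts between two queue pairs, tqpair 0x9bb280 on 10.0.0.3:4421 and tqpair 0x9b39a0 on 10.0.0.3:4422, and both keep failing with the same ECONNREFUSED/EBADF sequence. The sketch below only illustrates that alternating-retry shape and is not SPDK's bdev_nvme implementation; the helper name, attempt cap, and delay are assumptions.

/* Illustrative sketch only (not SPDK's bdev_nvme code): a host-side retry loop
 * that alternates reconnect attempts across two target ports, the way the log
 * alternates between tqpair 0x9bb280 (port 4421) and 0x9b39a0 (port 4422).
 * try_connect(), the attempt cap, and the delay are hypothetical. */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical helper: returns true when the TCP connection succeeds. */
static bool try_connect(const char *ip, int port)
{
    printf("resetting controller: connecting to %s:%d\n", ip, port);
    return false;   /* In this sketch the target never answers, as in the log. */
}

int main(void)
{
    const int ports[] = { 4421, 4422 };
    const int max_attempts = 8;       /* assumed cap; the real test retries longer */

    for (int attempt = 0; attempt < max_attempts; attempt++) {
        int port = ports[attempt % 2];
        if (try_connect("10.0.0.3", port)) {
            printf("controller reconnected on port %d\n", port);
            return 0;
        }
        fprintf(stderr, "reset attempt %d failed (port %d), retrying\n",
                attempt + 1, port);
        usleep(10 * 1000);            /* ~10 ms between attempts, as in the log */
    }

    fprintf(stderr, "controller left in failed state after %d attempts\n",
            max_attempts);
    return 1;
}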
00:18:34.488 [2024-10-01 13:52:38.431207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.488 [2024-10-01 13:52:38.431322] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.488 [2024-10-01 13:52:38.431353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.488 [2024-10-01 13:52:38.431372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.488 [2024-10-01 13:52:38.431959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.488 [2024-10-01 13:52:38.432146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.488 [2024-10-01 13:52:38.432180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.488 [2024-10-01 13:52:38.432219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.488 [2024-10-01 13:52:38.432342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.488 [2024-10-01 13:52:38.432388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.488 [2024-10-01 13:52:38.432479] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.488 [2024-10-01 13:52:38.432518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.488 [2024-10-01 13:52:38.432537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.488 [2024-10-01 13:52:38.432571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.488 [2024-10-01 13:52:38.432611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.488 [2024-10-01 13:52:38.432630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.488 [2024-10-01 13:52:38.432645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.488 [2024-10-01 13:52:38.432675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.488 [2024-10-01 13:52:38.441712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.488 [2024-10-01 13:52:38.441826] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.488 [2024-10-01 13:52:38.441859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.488 [2024-10-01 13:52:38.441878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.488 [2024-10-01 13:52:38.441927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.488 [2024-10-01 13:52:38.441963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.488 [2024-10-01 13:52:38.441981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.488 [2024-10-01 13:52:38.441997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.488 [2024-10-01 13:52:38.442027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.488 [2024-10-01 13:52:38.442452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.488 [2024-10-01 13:52:38.442558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.488 [2024-10-01 13:52:38.442595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.488 [2024-10-01 13:52:38.442615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.488 [2024-10-01 13:52:38.443208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.488 [2024-10-01 13:52:38.443399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.488 [2024-10-01 13:52:38.443425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.488 [2024-10-01 13:52:38.443440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.488 [2024-10-01 13:52:38.443548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.488 [2024-10-01 13:52:38.452060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.488 [2024-10-01 13:52:38.452195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.488 [2024-10-01 13:52:38.452252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.488 [2024-10-01 13:52:38.452273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.488 [2024-10-01 13:52:38.452308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.488 [2024-10-01 13:52:38.452341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.488 [2024-10-01 13:52:38.452359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.488 [2024-10-01 13:52:38.452374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.488 [2024-10-01 13:52:38.452405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.488 [2024-10-01 13:52:38.452525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.488 [2024-10-01 13:52:38.452614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.488 [2024-10-01 13:52:38.452644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.488 [2024-10-01 13:52:38.452662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.488 [2024-10-01 13:52:38.453899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.488 [2024-10-01 13:52:38.454704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.488 [2024-10-01 13:52:38.454742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.488 [2024-10-01 13:52:38.454771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.488 [2024-10-01 13:52:38.455107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.488 [2024-10-01 13:52:38.462292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.488 [2024-10-01 13:52:38.462427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.488 [2024-10-01 13:52:38.462459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.488 [2024-10-01 13:52:38.462478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.488 [2024-10-01 13:52:38.462511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.488 [2024-10-01 13:52:38.462563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.489 [2024-10-01 13:52:38.462584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.489 [2024-10-01 13:52:38.462600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.489 [2024-10-01 13:52:38.462641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.489 [2024-10-01 13:52:38.462676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.489 [2024-10-01 13:52:38.462760] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.489 [2024-10-01 13:52:38.462789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.489 [2024-10-01 13:52:38.462807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.489 [2024-10-01 13:52:38.462838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.489 [2024-10-01 13:52:38.464098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.489 [2024-10-01 13:52:38.464137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.489 [2024-10-01 13:52:38.464156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.489 [2024-10-01 13:52:38.464399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.489 [2024-10-01 13:52:38.472391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.489 [2024-10-01 13:52:38.472553] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.489 [2024-10-01 13:52:38.472597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.489 [2024-10-01 13:52:38.472619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.489 [2024-10-01 13:52:38.472655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.489 [2024-10-01 13:52:38.472691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.489 [2024-10-01 13:52:38.472710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.489 [2024-10-01 13:52:38.472727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.489 [2024-10-01 13:52:38.472758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.489 [2024-10-01 13:52:38.472809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.489 [2024-10-01 13:52:38.473480] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.489 [2024-10-01 13:52:38.473522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.489 [2024-10-01 13:52:38.473544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.489 [2024-10-01 13:52:38.473730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.489 [2024-10-01 13:52:38.473862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.489 [2024-10-01 13:52:38.473891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.489 [2024-10-01 13:52:38.473908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.489 [2024-10-01 13:52:38.473971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.489 [2024-10-01 13:52:38.482518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.489 [2024-10-01 13:52:38.482706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.489 [2024-10-01 13:52:38.482756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.489 [2024-10-01 13:52:38.482778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.489 [2024-10-01 13:52:38.482813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.489 [2024-10-01 13:52:38.484077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.489 [2024-10-01 13:52:38.484120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.489 [2024-10-01 13:52:38.484139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.489 [2024-10-01 13:52:38.484988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.489 [2024-10-01 13:52:38.485338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.489 [2024-10-01 13:52:38.485483] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.489 [2024-10-01 13:52:38.485523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.489 [2024-10-01 13:52:38.485543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.489 [2024-10-01 13:52:38.485579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.489 [2024-10-01 13:52:38.485628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.489 [2024-10-01 13:52:38.485651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.489 [2024-10-01 13:52:38.485666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.489 [2024-10-01 13:52:38.485698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.489 [2024-10-01 13:52:38.492660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.489 [2024-10-01 13:52:38.492813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.489 [2024-10-01 13:52:38.492848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.489 [2024-10-01 13:52:38.492867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.489 [2024-10-01 13:52:38.492902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.489 [2024-10-01 13:52:38.492954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.489 [2024-10-01 13:52:38.492973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.489 [2024-10-01 13:52:38.492990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.489 [2024-10-01 13:52:38.493021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.489 [2024-10-01 13:52:38.496315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.489 [2024-10-01 13:52:38.496432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.489 [2024-10-01 13:52:38.496463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.489 [2024-10-01 13:52:38.496482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.489 [2024-10-01 13:52:38.496515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.489 [2024-10-01 13:52:38.496548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.489 [2024-10-01 13:52:38.496566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.489 [2024-10-01 13:52:38.496581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.489 [2024-10-01 13:52:38.496612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.489 [2024-10-01 13:52:38.502774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.489 [2024-10-01 13:52:38.502921] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.489 [2024-10-01 13:52:38.502967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.489 [2024-10-01 13:52:38.503020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.489 [2024-10-01 13:52:38.503058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.489 [2024-10-01 13:52:38.503091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.489 [2024-10-01 13:52:38.503110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.489 [2024-10-01 13:52:38.503125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.489 [2024-10-01 13:52:38.503713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.489 [2024-10-01 13:52:38.506685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.489 [2024-10-01 13:52:38.506799] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.489 [2024-10-01 13:52:38.506841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.489 [2024-10-01 13:52:38.506862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.489 [2024-10-01 13:52:38.507462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.489 [2024-10-01 13:52:38.507649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.489 [2024-10-01 13:52:38.507683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.489 [2024-10-01 13:52:38.507702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.489 [2024-10-01 13:52:38.507813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.489 [2024-10-01 13:52:38.512881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.489 [2024-10-01 13:52:38.513010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.489 [2024-10-01 13:52:38.513042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.489 [2024-10-01 13:52:38.513060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.489 [2024-10-01 13:52:38.514291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.489 [2024-10-01 13:52:38.515104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.489 [2024-10-01 13:52:38.515142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.489 [2024-10-01 13:52:38.515161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.489 [2024-10-01 13:52:38.515483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.490 [2024-10-01 13:52:38.517250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.490 [2024-10-01 13:52:38.517361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.490 [2024-10-01 13:52:38.517392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.490 [2024-10-01 13:52:38.517411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.490 [2024-10-01 13:52:38.517445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.490 [2024-10-01 13:52:38.517483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.490 [2024-10-01 13:52:38.517531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.490 [2024-10-01 13:52:38.517546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.490 [2024-10-01 13:52:38.517579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.490 [2024-10-01 13:52:38.522984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.490 [2024-10-01 13:52:38.523106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.490 [2024-10-01 13:52:38.523138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.490 [2024-10-01 13:52:38.523157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.490 [2024-10-01 13:52:38.523189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.490 [2024-10-01 13:52:38.523222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.490 [2024-10-01 13:52:38.523240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.490 [2024-10-01 13:52:38.523254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.490 [2024-10-01 13:52:38.524476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.490 [2024-10-01 13:52:38.527504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.490 [2024-10-01 13:52:38.527615] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.490 [2024-10-01 13:52:38.527646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.490 [2024-10-01 13:52:38.527664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.490 [2024-10-01 13:52:38.527696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.490 [2024-10-01 13:52:38.527728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.490 [2024-10-01 13:52:38.527746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.490 [2024-10-01 13:52:38.527761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.490 [2024-10-01 13:52:38.527792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.490 [2024-10-01 13:52:38.533085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.490 [2024-10-01 13:52:38.533200] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.490 [2024-10-01 13:52:38.533240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.490 [2024-10-01 13:52:38.533261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.490 [2024-10-01 13:52:38.533848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.490 [2024-10-01 13:52:38.534062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.490 [2024-10-01 13:52:38.534098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.490 [2024-10-01 13:52:38.534116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.490 [2024-10-01 13:52:38.534227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.490 [2024-10-01 13:52:38.537683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.490 [2024-10-01 13:52:38.537835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.490 [2024-10-01 13:52:38.537875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.490 [2024-10-01 13:52:38.537896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.490 [2024-10-01 13:52:38.537946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.490 [2024-10-01 13:52:38.537998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.490 [2024-10-01 13:52:38.538020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.490 [2024-10-01 13:52:38.538035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.490 [2024-10-01 13:52:38.538066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.490 [2024-10-01 13:52:38.544361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.490 [2024-10-01 13:52:38.545223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.490 [2024-10-01 13:52:38.545268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.490 [2024-10-01 13:52:38.545289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.490 [2024-10-01 13:52:38.545608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.490 [2024-10-01 13:52:38.545698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.490 [2024-10-01 13:52:38.545724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.490 [2024-10-01 13:52:38.545739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.490 [2024-10-01 13:52:38.545773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.490 [2024-10-01 13:52:38.547811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.490 [2024-10-01 13:52:38.547936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.490 [2024-10-01 13:52:38.547968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.490 [2024-10-01 13:52:38.547987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.490 [2024-10-01 13:52:38.548020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.490 [2024-10-01 13:52:38.548052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.490 [2024-10-01 13:52:38.548069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.490 [2024-10-01 13:52:38.548083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.490 [2024-10-01 13:52:38.548114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.490 [2024-10-01 13:52:38.555613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.490 [2024-10-01 13:52:38.556432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.490 [2024-10-01 13:52:38.556476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.490 [2024-10-01 13:52:38.556497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.490 [2024-10-01 13:52:38.556635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.490 [2024-10-01 13:52:38.556675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.490 [2024-10-01 13:52:38.556694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.490 [2024-10-01 13:52:38.556708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.490 [2024-10-01 13:52:38.556741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.490 [2024-10-01 13:52:38.557901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.490 [2024-10-01 13:52:38.558024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.490 [2024-10-01 13:52:38.558063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.490 [2024-10-01 13:52:38.558083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.490 [2024-10-01 13:52:38.559322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.490 [2024-10-01 13:52:38.560129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.490 [2024-10-01 13:52:38.560168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.490 [2024-10-01 13:52:38.560186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.490 [2024-10-01 13:52:38.560505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.490 [2024-10-01 13:52:38.566710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.490 [2024-10-01 13:52:38.566843] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.490 [2024-10-01 13:52:38.566884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.490 [2024-10-01 13:52:38.566904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.490 [2024-10-01 13:52:38.567499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.490 [2024-10-01 13:52:38.567683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.490 [2024-10-01 13:52:38.567718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.490 [2024-10-01 13:52:38.567736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.490 [2024-10-01 13:52:38.567845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.490 [2024-10-01 13:52:38.567995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.490 [2024-10-01 13:52:38.568099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.490 [2024-10-01 13:52:38.568130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.490 [2024-10-01 13:52:38.568148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.491 [2024-10-01 13:52:38.568180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.491 [2024-10-01 13:52:38.568212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.491 [2024-10-01 13:52:38.568229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.491 [2024-10-01 13:52:38.568265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.491 [2024-10-01 13:52:38.569489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.491 [2024-10-01 13:52:38.577187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.491 [2024-10-01 13:52:38.577307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.491 [2024-10-01 13:52:38.577339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.491 [2024-10-01 13:52:38.577358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.491 [2024-10-01 13:52:38.577392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.491 [2024-10-01 13:52:38.577424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.491 [2024-10-01 13:52:38.577442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.491 [2024-10-01 13:52:38.577458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.491 [2024-10-01 13:52:38.577489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.491 [2024-10-01 13:52:38.578073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.491 [2024-10-01 13:52:38.578731] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.491 [2024-10-01 13:52:38.578773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.491 [2024-10-01 13:52:38.578793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.491 [2024-10-01 13:52:38.578973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.491 [2024-10-01 13:52:38.579090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.491 [2024-10-01 13:52:38.579118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.491 [2024-10-01 13:52:38.579136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.491 [2024-10-01 13:52:38.579177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.491 [2024-10-01 13:52:38.587461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.491 [2024-10-01 13:52:38.587580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.491 [2024-10-01 13:52:38.587613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.491 [2024-10-01 13:52:38.587632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.491 [2024-10-01 13:52:38.587665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.491 [2024-10-01 13:52:38.587696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.491 [2024-10-01 13:52:38.587714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.491 [2024-10-01 13:52:38.587729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.491 [2024-10-01 13:52:38.587761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.491 [2024-10-01 13:52:38.590293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.491 [2024-10-01 13:52:38.590442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.491 [2024-10-01 13:52:38.590509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.491 [2024-10-01 13:52:38.590532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.491 [2024-10-01 13:52:38.590580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.491 [2024-10-01 13:52:38.590613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.491 [2024-10-01 13:52:38.590631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.491 [2024-10-01 13:52:38.590645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.491 [2024-10-01 13:52:38.590677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.491 [2024-10-01 13:52:38.597562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.491 [2024-10-01 13:52:38.597681] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.491 [2024-10-01 13:52:38.597714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.491 [2024-10-01 13:52:38.597733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.491 [2024-10-01 13:52:38.597782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.491 [2024-10-01 13:52:38.597819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.491 [2024-10-01 13:52:38.597837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.491 [2024-10-01 13:52:38.597852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.491 [2024-10-01 13:52:38.597883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.491 [2024-10-01 13:52:38.601236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.491 [2024-10-01 13:52:38.601346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.491 [2024-10-01 13:52:38.601378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.491 [2024-10-01 13:52:38.601396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.491 [2024-10-01 13:52:38.601429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.491 [2024-10-01 13:52:38.601461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.491 [2024-10-01 13:52:38.601478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.491 [2024-10-01 13:52:38.601493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.491 [2024-10-01 13:52:38.601524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.491 [2024-10-01 13:52:38.607670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.491 [2024-10-01 13:52:38.607811] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.491 [2024-10-01 13:52:38.607851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.491 [2024-10-01 13:52:38.607872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.491 [2024-10-01 13:52:38.607907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.491 [2024-10-01 13:52:38.607994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.491 [2024-10-01 13:52:38.608014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.491 [2024-10-01 13:52:38.608029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.491 [2024-10-01 13:52:38.608061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.491 [2024-10-01 13:52:38.611682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.491 [2024-10-01 13:52:38.611799] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.491 [2024-10-01 13:52:38.611838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.491 [2024-10-01 13:52:38.611858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.491 [2024-10-01 13:52:38.612466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.491 [2024-10-01 13:52:38.612659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.491 [2024-10-01 13:52:38.612694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.491 [2024-10-01 13:52:38.612712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.491 [2024-10-01 13:52:38.612840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.491 [2024-10-01 13:52:38.617782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.491 [2024-10-01 13:52:38.617926] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.491 [2024-10-01 13:52:38.617960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.491 [2024-10-01 13:52:38.617980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.491 [2024-10-01 13:52:38.618013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.491 [2024-10-01 13:52:38.618046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.491 [2024-10-01 13:52:38.618064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.491 [2024-10-01 13:52:38.618080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.491 [2024-10-01 13:52:38.618112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.491 [2024-10-01 13:52:38.622353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.491 [2024-10-01 13:52:38.622478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.491 [2024-10-01 13:52:38.622510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.491 [2024-10-01 13:52:38.622529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.491 [2024-10-01 13:52:38.622588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.491 [2024-10-01 13:52:38.622621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.492 [2024-10-01 13:52:38.622638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.492 [2024-10-01 13:52:38.622652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.492 [2024-10-01 13:52:38.622727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.492 [2024-10-01 13:52:38.627884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.492 [2024-10-01 13:52:38.628038] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.492 [2024-10-01 13:52:38.628070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.492 [2024-10-01 13:52:38.628088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.492 [2024-10-01 13:52:38.628121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.492 [2024-10-01 13:52:38.628152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.492 [2024-10-01 13:52:38.628170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.492 [2024-10-01 13:52:38.628185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.492 [2024-10-01 13:52:38.628216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.492 [2024-10-01 13:52:38.632751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.492 [2024-10-01 13:52:38.632885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.492 [2024-10-01 13:52:38.632932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.492 [2024-10-01 13:52:38.632953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.492 [2024-10-01 13:52:38.632987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.492 [2024-10-01 13:52:38.633018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.492 [2024-10-01 13:52:38.633036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.492 [2024-10-01 13:52:38.633050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.492 [2024-10-01 13:52:38.633081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.492 [2024-10-01 13:52:38.637992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.492 [2024-10-01 13:52:38.638112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.492 [2024-10-01 13:52:38.638144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.492 [2024-10-01 13:52:38.638163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.492 [2024-10-01 13:52:38.638196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.492 [2024-10-01 13:52:38.638227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.492 [2024-10-01 13:52:38.638244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.492 [2024-10-01 13:52:38.638257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.492 [2024-10-01 13:52:38.638289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.492 [2024-10-01 13:52:38.642956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.492 [2024-10-01 13:52:38.643071] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.492 [2024-10-01 13:52:38.643104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.492 [2024-10-01 13:52:38.643146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.492 [2024-10-01 13:52:38.643181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.492 [2024-10-01 13:52:38.643213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.492 [2024-10-01 13:52:38.643230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.492 [2024-10-01 13:52:38.643244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.492 [2024-10-01 13:52:38.643274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.492 [2024-10-01 13:52:38.648089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.492 [2024-10-01 13:52:38.648204] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.492 [2024-10-01 13:52:38.648236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.492 [2024-10-01 13:52:38.648254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.492 [2024-10-01 13:52:38.648286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.492 [2024-10-01 13:52:38.648318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.492 [2024-10-01 13:52:38.648335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.492 [2024-10-01 13:52:38.648349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.492 [2024-10-01 13:52:38.648379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.492 [2024-10-01 13:52:38.653050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.492 [2024-10-01 13:52:38.653158] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.492 [2024-10-01 13:52:38.653190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.492 [2024-10-01 13:52:38.653208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.492 [2024-10-01 13:52:38.653240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.492 [2024-10-01 13:52:38.653271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.492 [2024-10-01 13:52:38.653288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.492 [2024-10-01 13:52:38.653302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.492 [2024-10-01 13:52:38.653332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.492 [2024-10-01 13:52:38.658183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.492 [2024-10-01 13:52:38.658299] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.492 [2024-10-01 13:52:38.658330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.492 [2024-10-01 13:52:38.658349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.492 [2024-10-01 13:52:38.658382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.492 [2024-10-01 13:52:38.658413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.492 [2024-10-01 13:52:38.658450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.492 [2024-10-01 13:52:38.658466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.492 [2024-10-01 13:52:38.658498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.492 [2024-10-01 13:52:38.663137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.492 [2024-10-01 13:52:38.663256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.492 [2024-10-01 13:52:38.663287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.492 [2024-10-01 13:52:38.663305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.492 [2024-10-01 13:52:38.663338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.492 [2024-10-01 13:52:38.663369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.492 [2024-10-01 13:52:38.663386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.492 [2024-10-01 13:52:38.663401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.492 [2024-10-01 13:52:38.664629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.492 [2024-10-01 13:52:38.668280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.492 [2024-10-01 13:52:38.668396] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.492 [2024-10-01 13:52:38.668426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.492 [2024-10-01 13:52:38.668445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.492 [2024-10-01 13:52:38.668478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.492 [2024-10-01 13:52:38.669072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.492 [2024-10-01 13:52:38.669111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.492 [2024-10-01 13:52:38.669130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.492 [2024-10-01 13:52:38.669299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.492 [2024-10-01 13:52:38.673232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.492 [2024-10-01 13:52:38.673360] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.492 [2024-10-01 13:52:38.673393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.492 [2024-10-01 13:52:38.673411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.492 [2024-10-01 13:52:38.673444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.492 [2024-10-01 13:52:38.673475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.493 [2024-10-01 13:52:38.673493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.493 [2024-10-01 13:52:38.673507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.493 [2024-10-01 13:52:38.673537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.493 [2024-10-01 13:52:38.678370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.493 [2024-10-01 13:52:38.679725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.493 [2024-10-01 13:52:38.679772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.493 [2024-10-01 13:52:38.679793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.493 [2024-10-01 13:52:38.680580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.493 [2024-10-01 13:52:38.680941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.493 [2024-10-01 13:52:38.680985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.493 [2024-10-01 13:52:38.681003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.493 [2024-10-01 13:52:38.681077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.493 [2024-10-01 13:52:38.683327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.493 [2024-10-01 13:52:38.683442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.493 [2024-10-01 13:52:38.683474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.493 [2024-10-01 13:52:38.683492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.493 [2024-10-01 13:52:38.683525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.493 [2024-10-01 13:52:38.684119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.493 [2024-10-01 13:52:38.684158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.493 [2024-10-01 13:52:38.684177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.493 [2024-10-01 13:52:38.684357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.493 [2024-10-01 13:52:38.688491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.493 [2024-10-01 13:52:38.688604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.493 [2024-10-01 13:52:38.688635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.493 [2024-10-01 13:52:38.688652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.493 [2024-10-01 13:52:38.689877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.493 [2024-10-01 13:52:38.690119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.493 [2024-10-01 13:52:38.690148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.493 [2024-10-01 13:52:38.690164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.493 [2024-10-01 13:52:38.691108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.493 [2024-10-01 13:52:38.694610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.493 [2024-10-01 13:52:38.695470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.493 [2024-10-01 13:52:38.695514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.493 [2024-10-01 13:52:38.695536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.493 [2024-10-01 13:52:38.695881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.493 [2024-10-01 13:52:38.695986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.493 [2024-10-01 13:52:38.696013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.493 [2024-10-01 13:52:38.696028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.493 [2024-10-01 13:52:38.696061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.493 [2024-10-01 13:52:38.699227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.493 [2024-10-01 13:52:38.699348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.493 [2024-10-01 13:52:38.699380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.493 [2024-10-01 13:52:38.699397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.493 [2024-10-01 13:52:38.699431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.493 [2024-10-01 13:52:38.699462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.493 [2024-10-01 13:52:38.699479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.493 [2024-10-01 13:52:38.699493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.493 [2024-10-01 13:52:38.699523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.493 [2024-10-01 13:52:38.705786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.493 [2024-10-01 13:52:38.706612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.493 [2024-10-01 13:52:38.706657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.493 [2024-10-01 13:52:38.706677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.493 [2024-10-01 13:52:38.706774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.493 [2024-10-01 13:52:38.706812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.493 [2024-10-01 13:52:38.706830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.493 [2024-10-01 13:52:38.706845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.493 [2024-10-01 13:52:38.706875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.493 [2024-10-01 13:52:38.710356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.493 [2024-10-01 13:52:38.710702] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.493 [2024-10-01 13:52:38.710746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.493 [2024-10-01 13:52:38.710767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.493 [2024-10-01 13:52:38.710837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.493 [2024-10-01 13:52:38.710875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.493 [2024-10-01 13:52:38.710893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.493 [2024-10-01 13:52:38.710947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.493 [2024-10-01 13:52:38.710983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.493 [2024-10-01 13:52:38.716983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.493 [2024-10-01 13:52:38.717118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.493 [2024-10-01 13:52:38.717152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.493 [2024-10-01 13:52:38.717171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.493 [2024-10-01 13:52:38.717757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.493 [2024-10-01 13:52:38.717965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.493 [2024-10-01 13:52:38.717996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.493 [2024-10-01 13:52:38.718012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.493 [2024-10-01 13:52:38.718145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.493 [2024-10-01 13:52:38.721635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.493 [2024-10-01 13:52:38.721753] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.493 [2024-10-01 13:52:38.721784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.493 [2024-10-01 13:52:38.721802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.493 [2024-10-01 13:52:38.721835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.493 [2024-10-01 13:52:38.721866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.493 [2024-10-01 13:52:38.721882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.493 [2024-10-01 13:52:38.721897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.493 [2024-10-01 13:52:38.721945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.493 [2024-10-01 13:52:38.727525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.493 [2024-10-01 13:52:38.727650] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.493 [2024-10-01 13:52:38.727682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.494 [2024-10-01 13:52:38.727701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.494 [2024-10-01 13:52:38.727734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.494 [2024-10-01 13:52:38.727765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.494 [2024-10-01 13:52:38.727783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.494 [2024-10-01 13:52:38.727797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.494 [2024-10-01 13:52:38.727828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.494 [2024-10-01 13:52:38.731993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.494 [2024-10-01 13:52:38.732110] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.494 [2024-10-01 13:52:38.732171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.494 [2024-10-01 13:52:38.732191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.494 [2024-10-01 13:52:38.732772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.494 [2024-10-01 13:52:38.732990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.494 [2024-10-01 13:52:38.733018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.494 [2024-10-01 13:52:38.733033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.494 [2024-10-01 13:52:38.733158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.494 [2024-10-01 13:52:38.737806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.494 [2024-10-01 13:52:38.737945] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.494 [2024-10-01 13:52:38.737978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.494 [2024-10-01 13:52:38.737996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.494 [2024-10-01 13:52:38.738030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.494 [2024-10-01 13:52:38.738060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.494 [2024-10-01 13:52:38.738078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.494 [2024-10-01 13:52:38.738092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.494 [2024-10-01 13:52:38.738122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.494 [2024-10-01 13:52:38.742548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.494 [2024-10-01 13:52:38.742662] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.494 [2024-10-01 13:52:38.742693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.494 [2024-10-01 13:52:38.742712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.494 [2024-10-01 13:52:38.742744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.494 [2024-10-01 13:52:38.742775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.494 [2024-10-01 13:52:38.742791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.494 [2024-10-01 13:52:38.742806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.494 [2024-10-01 13:52:38.742836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.494 [2024-10-01 13:52:38.747959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.494 [2024-10-01 13:52:38.748073] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.494 [2024-10-01 13:52:38.748105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.494 [2024-10-01 13:52:38.748123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.494 [2024-10-01 13:52:38.748156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.494 [2024-10-01 13:52:38.748227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.494 [2024-10-01 13:52:38.748250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.494 [2024-10-01 13:52:38.748265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.494 [2024-10-01 13:52:38.748296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.494 [2024-10-01 13:52:38.752793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.494 [2024-10-01 13:52:38.752924] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.494 [2024-10-01 13:52:38.752957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.494 [2024-10-01 13:52:38.752975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.494 [2024-10-01 13:52:38.753008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.494 [2024-10-01 13:52:38.753051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.494 [2024-10-01 13:52:38.753071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.494 [2024-10-01 13:52:38.753085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.494 [2024-10-01 13:52:38.753116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.494 [2024-10-01 13:52:38.758062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.494 [2024-10-01 13:52:38.758175] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.494 [2024-10-01 13:52:38.758206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.494 [2024-10-01 13:52:38.758224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.494 [2024-10-01 13:52:38.758256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.494 [2024-10-01 13:52:38.758286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.494 [2024-10-01 13:52:38.758303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.494 [2024-10-01 13:52:38.758317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.494 [2024-10-01 13:52:38.758347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.494 [2024-10-01 13:52:38.762885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.494 [2024-10-01 13:52:38.763009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.494 [2024-10-01 13:52:38.763041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.494 [2024-10-01 13:52:38.763059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.494 [2024-10-01 13:52:38.763091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.494 [2024-10-01 13:52:38.763142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.494 [2024-10-01 13:52:38.763161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.494 [2024-10-01 13:52:38.763175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.494 [2024-10-01 13:52:38.763222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.494 [2024-10-01 13:52:38.768157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.494 [2024-10-01 13:52:38.768272] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.494 [2024-10-01 13:52:38.768304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.494 [2024-10-01 13:52:38.768322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.494 [2024-10-01 13:52:38.769555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.494 [2024-10-01 13:52:38.770367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.494 [2024-10-01 13:52:38.770408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.494 [2024-10-01 13:52:38.770427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.494 [2024-10-01 13:52:38.770758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.494 [2024-10-01 13:52:38.772985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.494 [2024-10-01 13:52:38.773102] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.494 [2024-10-01 13:52:38.773133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.494 [2024-10-01 13:52:38.773151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.494 [2024-10-01 13:52:38.773183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.494 [2024-10-01 13:52:38.773224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.494 [2024-10-01 13:52:38.773244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.494 [2024-10-01 13:52:38.773259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.494 [2024-10-01 13:52:38.773289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.494 [2024-10-01 13:52:38.778250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.494 [2024-10-01 13:52:38.778362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.494 [2024-10-01 13:52:38.778393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.494 [2024-10-01 13:52:38.778411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.494 [2024-10-01 13:52:38.778443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.494 [2024-10-01 13:52:38.778474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.494 [2024-10-01 13:52:38.778491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.494 [2024-10-01 13:52:38.778505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.494 [2024-10-01 13:52:38.779740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.494 [2024-10-01 13:52:38.783074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.494 [2024-10-01 13:52:38.783189] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.495 [2024-10-01 13:52:38.783221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.495 [2024-10-01 13:52:38.783258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.495 [2024-10-01 13:52:38.783293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.495 [2024-10-01 13:52:38.784517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.495 [2024-10-01 13:52:38.784557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.495 [2024-10-01 13:52:38.784576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.495 [2024-10-01 13:52:38.785354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.495 [2024-10-01 13:52:38.788340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.495 [2024-10-01 13:52:38.789018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.495 [2024-10-01 13:52:38.789063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.495 [2024-10-01 13:52:38.789084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.495 [2024-10-01 13:52:38.789243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.495 [2024-10-01 13:52:38.789365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.495 [2024-10-01 13:52:38.789400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.495 [2024-10-01 13:52:38.789418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.495 [2024-10-01 13:52:38.789458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.495 [2024-10-01 13:52:38.793168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.495 [2024-10-01 13:52:38.793292] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.495 [2024-10-01 13:52:38.793324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.495 [2024-10-01 13:52:38.793342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.495 [2024-10-01 13:52:38.793375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.495 [2024-10-01 13:52:38.793417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.495 [2024-10-01 13:52:38.793435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.495 [2024-10-01 13:52:38.793450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.495 [2024-10-01 13:52:38.794676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.495 [2024-10-01 13:52:38.800339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.495 [2024-10-01 13:52:38.800675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.495 [2024-10-01 13:52:38.800720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.495 [2024-10-01 13:52:38.800740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.495 [2024-10-01 13:52:38.800810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.495 [2024-10-01 13:52:38.800849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.495 [2024-10-01 13:52:38.800882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.495 [2024-10-01 13:52:38.800898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.495 [2024-10-01 13:52:38.800948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.495 [2024-10-01 13:52:38.803269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.495 [2024-10-01 13:52:38.803945] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.495 [2024-10-01 13:52:38.803989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.495 [2024-10-01 13:52:38.804009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.495 [2024-10-01 13:52:38.804223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.495 [2024-10-01 13:52:38.804351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.495 [2024-10-01 13:52:38.804373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.495 [2024-10-01 13:52:38.804387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.495 [2024-10-01 13:52:38.804426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.495 [2024-10-01 13:52:38.811493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.495 [2024-10-01 13:52:38.811618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.495 [2024-10-01 13:52:38.811650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.495 [2024-10-01 13:52:38.811669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.495 [2024-10-01 13:52:38.811702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.495 [2024-10-01 13:52:38.811733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.495 [2024-10-01 13:52:38.811750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.495 [2024-10-01 13:52:38.811764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.495 [2024-10-01 13:52:38.811795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.495 [2024-10-01 13:52:38.815275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.495 [2024-10-01 13:52:38.815611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.495 [2024-10-01 13:52:38.815655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.495 [2024-10-01 13:52:38.815677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.495 [2024-10-01 13:52:38.815747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.495 [2024-10-01 13:52:38.815785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.495 [2024-10-01 13:52:38.815803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.495 [2024-10-01 13:52:38.815818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.495 [2024-10-01 13:52:38.815849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.495 [2024-10-01 13:52:38.821790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.495 [2024-10-01 13:52:38.821927] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.495 [2024-10-01 13:52:38.821960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.495 [2024-10-01 13:52:38.821978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.495 [2024-10-01 13:52:38.822570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.495 [2024-10-01 13:52:38.822759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.495 [2024-10-01 13:52:38.822835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.495 [2024-10-01 13:52:38.822854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.495 [2024-10-01 13:52:38.822980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.495 [2024-10-01 13:52:38.826478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.495 [2024-10-01 13:52:38.826612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.495 [2024-10-01 13:52:38.826644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.495 [2024-10-01 13:52:38.826662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.495 [2024-10-01 13:52:38.826695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.495 [2024-10-01 13:52:38.826726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.495 [2024-10-01 13:52:38.826743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.495 [2024-10-01 13:52:38.826757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.495 [2024-10-01 13:52:38.826788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.495 [2024-10-01 13:52:38.832377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.495 [2024-10-01 13:52:38.832501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.495 [2024-10-01 13:52:38.832534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.495 [2024-10-01 13:52:38.832552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.495 [2024-10-01 13:52:38.832587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.495 [2024-10-01 13:52:38.832618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.495 [2024-10-01 13:52:38.832636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.495 [2024-10-01 13:52:38.832651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.495 [2024-10-01 13:52:38.832681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.495 [2024-10-01 13:52:38.836892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.495 [2024-10-01 13:52:38.837029] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.495 [2024-10-01 13:52:38.837061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.495 [2024-10-01 13:52:38.837079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.495 [2024-10-01 13:52:38.837696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.495 [2024-10-01 13:52:38.837889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.495 [2024-10-01 13:52:38.837939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.495 [2024-10-01 13:52:38.837958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.495 [2024-10-01 13:52:38.838093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.495 [2024-10-01 13:52:38.842683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.495 [2024-10-01 13:52:38.842807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.495 [2024-10-01 13:52:38.842839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.495 [2024-10-01 13:52:38.842857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.496 [2024-10-01 13:52:38.842889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.496 [2024-10-01 13:52:38.842937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.496 [2024-10-01 13:52:38.842959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.496 [2024-10-01 13:52:38.842973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.496 [2024-10-01 13:52:38.843004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.496 [2024-10-01 13:52:38.847415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.496 [2024-10-01 13:52:38.847530] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.496 [2024-10-01 13:52:38.847561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.496 [2024-10-01 13:52:38.847578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.496 [2024-10-01 13:52:38.847611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.496 [2024-10-01 13:52:38.847642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.496 [2024-10-01 13:52:38.847659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.496 [2024-10-01 13:52:38.847673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.496 [2024-10-01 13:52:38.847702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.496 [2024-10-01 13:52:38.852786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.496 [2024-10-01 13:52:38.852900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.496 [2024-10-01 13:52:38.852949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.496 [2024-10-01 13:52:38.852969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.496 [2024-10-01 13:52:38.853003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.496 [2024-10-01 13:52:38.853034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.496 [2024-10-01 13:52:38.853051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.496 [2024-10-01 13:52:38.853085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.496 [2024-10-01 13:52:38.853118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.496 [2024-10-01 13:52:38.857590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.496 [2024-10-01 13:52:38.857707] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.496 [2024-10-01 13:52:38.857740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.496 [2024-10-01 13:52:38.857758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.496 [2024-10-01 13:52:38.857790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.496 [2024-10-01 13:52:38.857822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.496 [2024-10-01 13:52:38.857839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.496 [2024-10-01 13:52:38.857853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.496 [2024-10-01 13:52:38.857883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.496 [2024-10-01 13:52:38.862883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.496 [2024-10-01 13:52:38.863010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.496 [2024-10-01 13:52:38.863042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.496 [2024-10-01 13:52:38.863060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.496 [2024-10-01 13:52:38.863092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.496 [2024-10-01 13:52:38.863123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.496 [2024-10-01 13:52:38.863140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.496 [2024-10-01 13:52:38.863153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.496 [2024-10-01 13:52:38.863183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.496 8041.60 IOPS, 31.41 MiB/s [2024-10-01 13:52:38.870471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.496 [2024-10-01 13:52:38.871635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.496 [2024-10-01 13:52:38.871681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.496 [2024-10-01 13:52:38.871702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.496 [2024-10-01 13:52:38.872637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.496 [2024-10-01 13:52:38.872844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.496 [2024-10-01 13:52:38.872879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.496 [2024-10-01 13:52:38.872897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.496 [2024-10-01 13:52:38.873023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.496 [2024-10-01 13:52:38.875035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.496 [2024-10-01 13:52:38.875455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.496 [2024-10-01 13:52:38.875499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.496 [2024-10-01 13:52:38.875520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.496 [2024-10-01 13:52:38.875591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.496 [2024-10-01 13:52:38.875630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.496 [2024-10-01 13:52:38.875648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.496 [2024-10-01 13:52:38.875662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.496 [2024-10-01 13:52:38.875693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.496 [2024-10-01 13:52:38.881632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.496 [2024-10-01 13:52:38.881754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.496 [2024-10-01 13:52:38.881787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.496 [2024-10-01 13:52:38.881805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.496 [2024-10-01 13:52:38.882394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.496 [2024-10-01 13:52:38.882594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.496 [2024-10-01 13:52:38.882631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.496 [2024-10-01 13:52:38.882649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.496 [2024-10-01 13:52:38.882758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.496 [2024-10-01 13:52:38.886239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.496 [2024-10-01 13:52:38.886356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.496 [2024-10-01 13:52:38.886387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.496 [2024-10-01 13:52:38.886405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.496 [2024-10-01 13:52:38.886438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.496 [2024-10-01 13:52:38.886469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.496 [2024-10-01 13:52:38.886485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.496 [2024-10-01 13:52:38.886499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.496 [2024-10-01 13:52:38.886530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.496 [2024-10-01 13:52:38.892150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.496 [2024-10-01 13:52:38.892269] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.496 [2024-10-01 13:52:38.892302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.496 [2024-10-01 13:52:38.892319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.496 [2024-10-01 13:52:38.892352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.496 [2024-10-01 13:52:38.892403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.496 [2024-10-01 13:52:38.892421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.496 [2024-10-01 13:52:38.892435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.496 [2024-10-01 13:52:38.892466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.496 [2024-10-01 13:52:38.896605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.496 [2024-10-01 13:52:38.896720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.496 [2024-10-01 13:52:38.896751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.496 [2024-10-01 13:52:38.896769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.496 [2024-10-01 13:52:38.897358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.496 [2024-10-01 13:52:38.897564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.496 [2024-10-01 13:52:38.897601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.496 [2024-10-01 13:52:38.897620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.496 [2024-10-01 13:52:38.897727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.496 [2024-10-01 13:52:38.902379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.496 [2024-10-01 13:52:38.902494] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.496 [2024-10-01 13:52:38.902525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.496 [2024-10-01 13:52:38.902558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.496 [2024-10-01 13:52:38.902593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.496 [2024-10-01 13:52:38.902623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.497 [2024-10-01 13:52:38.902639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.497 [2024-10-01 13:52:38.902653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.497 [2024-10-01 13:52:38.902684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.497 [2024-10-01 13:52:38.907159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.497 [2024-10-01 13:52:38.907276] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.497 [2024-10-01 13:52:38.907308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.497 [2024-10-01 13:52:38.907325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.497 [2024-10-01 13:52:38.907358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.497 [2024-10-01 13:52:38.907400] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.497 [2024-10-01 13:52:38.907419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.497 [2024-10-01 13:52:38.907434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.497 [2024-10-01 13:52:38.907483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.497 [2024-10-01 13:52:38.912559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.497 [2024-10-01 13:52:38.912680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.497 [2024-10-01 13:52:38.912712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.497 [2024-10-01 13:52:38.912730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.497 [2024-10-01 13:52:38.912763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.497 [2024-10-01 13:52:38.912794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.497 [2024-10-01 13:52:38.912812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.497 [2024-10-01 13:52:38.912826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.497 [2024-10-01 13:52:38.912857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.497 [2024-10-01 13:52:38.917529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.497 [2024-10-01 13:52:38.917660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.497 [2024-10-01 13:52:38.917692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.497 [2024-10-01 13:52:38.917710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.497 [2024-10-01 13:52:38.917744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.497 [2024-10-01 13:52:38.917775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.497 [2024-10-01 13:52:38.917793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.497 [2024-10-01 13:52:38.917808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.497 [2024-10-01 13:52:38.917838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.497 [2024-10-01 13:52:38.922655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.497 [2024-10-01 13:52:38.922776] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.497 [2024-10-01 13:52:38.922808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.497 [2024-10-01 13:52:38.922826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.497 [2024-10-01 13:52:38.922859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.497 [2024-10-01 13:52:38.922890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.497 [2024-10-01 13:52:38.922907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.497 [2024-10-01 13:52:38.922940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.497 [2024-10-01 13:52:38.922973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.497 [2024-10-01 13:52:38.927753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.497 [2024-10-01 13:52:38.927869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.497 [2024-10-01 13:52:38.927901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.497 [2024-10-01 13:52:38.927968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.497 [2024-10-01 13:52:38.928005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.497 [2024-10-01 13:52:38.928036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.497 [2024-10-01 13:52:38.928054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.497 [2024-10-01 13:52:38.928068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.497 [2024-10-01 13:52:38.928099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.497 [2024-10-01 13:52:38.932747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.497 [2024-10-01 13:52:38.932864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.497 [2024-10-01 13:52:38.932896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.497 [2024-10-01 13:52:38.932930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.497 [2024-10-01 13:52:38.932966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.497 [2024-10-01 13:52:38.932998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.497 [2024-10-01 13:52:38.933014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.497 [2024-10-01 13:52:38.933028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.497 [2024-10-01 13:52:38.933058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.497 [2024-10-01 13:52:38.937842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.497 [2024-10-01 13:52:38.937971] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.497 [2024-10-01 13:52:38.938003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.497 [2024-10-01 13:52:38.938020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.497 [2024-10-01 13:52:38.938053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.497 [2024-10-01 13:52:38.938084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.497 [2024-10-01 13:52:38.938101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.497 [2024-10-01 13:52:38.938115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.497 [2024-10-01 13:52:38.938145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.497 [2024-10-01 13:52:38.942863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.497 [2024-10-01 13:52:38.942991] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.497 [2024-10-01 13:52:38.943022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.497 [2024-10-01 13:52:38.943039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.497 [2024-10-01 13:52:38.943071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.497 [2024-10-01 13:52:38.943102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.497 [2024-10-01 13:52:38.943137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.497 [2024-10-01 13:52:38.943152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.497 [2024-10-01 13:52:38.943184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.497 [2024-10-01 13:52:38.947951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.497 [2024-10-01 13:52:38.948066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.497 [2024-10-01 13:52:38.948098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.497 [2024-10-01 13:52:38.948115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.497 [2024-10-01 13:52:38.948150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.497 [2024-10-01 13:52:38.949361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.497 [2024-10-01 13:52:38.949400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.497 [2024-10-01 13:52:38.949419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.497 [2024-10-01 13:52:38.950179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.498 [2024-10-01 13:52:38.952966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.498 [2024-10-01 13:52:38.953077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.498 [2024-10-01 13:52:38.953108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.498 [2024-10-01 13:52:38.953126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.498 [2024-10-01 13:52:38.953158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.498 [2024-10-01 13:52:38.953189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.498 [2024-10-01 13:52:38.953206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.498 [2024-10-01 13:52:38.953220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.498 [2024-10-01 13:52:38.953788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.498 [2024-10-01 13:52:38.958045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.498 [2024-10-01 13:52:38.958158] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.498 [2024-10-01 13:52:38.958189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.498 [2024-10-01 13:52:38.958207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.498 [2024-10-01 13:52:38.958238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.498 [2024-10-01 13:52:38.958269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.498 [2024-10-01 13:52:38.958286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.498 [2024-10-01 13:52:38.958300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.498 [2024-10-01 13:52:38.959522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.498 [2024-10-01 13:52:38.963058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.498 [2024-10-01 13:52:38.964359] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.498 [2024-10-01 13:52:38.964405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.498 [2024-10-01 13:52:38.964426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.498 [2024-10-01 13:52:38.965199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.498 [2024-10-01 13:52:38.965538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.498 [2024-10-01 13:52:38.965577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.498 [2024-10-01 13:52:38.965595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.498 [2024-10-01 13:52:38.965667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.498 [2024-10-01 13:52:38.968136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.498 [2024-10-01 13:52:38.968789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.498 [2024-10-01 13:52:38.968833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.498 [2024-10-01 13:52:38.968854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.498 [2024-10-01 13:52:38.969048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.498 [2024-10-01 13:52:38.969166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.498 [2024-10-01 13:52:38.969187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.498 [2024-10-01 13:52:38.969202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.498 [2024-10-01 13:52:38.969241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.498 [2024-10-01 13:52:38.973148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.498 [2024-10-01 13:52:38.974471] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.498 [2024-10-01 13:52:38.974515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.498 [2024-10-01 13:52:38.974546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.498 [2024-10-01 13:52:38.974761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.498 [2024-10-01 13:52:38.975674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.498 [2024-10-01 13:52:38.975712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.498 [2024-10-01 13:52:38.975731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.498 [2024-10-01 13:52:38.976475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.498 [2024-10-01 13:52:38.980016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.498 [2024-10-01 13:52:38.980365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.498 [2024-10-01 13:52:38.980409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.498 [2024-10-01 13:52:38.980430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.498 [2024-10-01 13:52:38.980528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.498 [2024-10-01 13:52:38.980570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.498 [2024-10-01 13:52:38.980588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.498 [2024-10-01 13:52:38.980603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.498 [2024-10-01 13:52:38.980634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.498 [2024-10-01 13:52:38.983824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.498 [2024-10-01 13:52:38.983970] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.498 [2024-10-01 13:52:38.984002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.498 [2024-10-01 13:52:38.984020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.498 [2024-10-01 13:52:38.984056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.498 [2024-10-01 13:52:38.984088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.498 [2024-10-01 13:52:38.984106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.498 [2024-10-01 13:52:38.984120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.498 [2024-10-01 13:52:38.984151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.498 [2024-10-01 13:52:38.991202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.498 [2024-10-01 13:52:38.991326] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.498 [2024-10-01 13:52:38.991358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.498 [2024-10-01 13:52:38.991376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.498 [2024-10-01 13:52:38.991409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.498 [2024-10-01 13:52:38.991441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.498 [2024-10-01 13:52:38.991457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.498 [2024-10-01 13:52:38.991471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.498 [2024-10-01 13:52:38.991502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.498 [2024-10-01 13:52:38.994906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.498 [2024-10-01 13:52:38.995256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.498 [2024-10-01 13:52:38.995300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.498 [2024-10-01 13:52:38.995321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.498 [2024-10-01 13:52:38.995391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.498 [2024-10-01 13:52:38.995429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.498 [2024-10-01 13:52:38.995446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.498 [2024-10-01 13:52:38.995480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.498 [2024-10-01 13:52:38.995515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.498 [2024-10-01 13:52:39.001438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.498 [2024-10-01 13:52:39.001552] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.498 [2024-10-01 13:52:39.001583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.498 [2024-10-01 13:52:39.001602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.498 [2024-10-01 13:52:39.002192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.498 [2024-10-01 13:52:39.002380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.498 [2024-10-01 13:52:39.002416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.498 [2024-10-01 13:52:39.002435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.498 [2024-10-01 13:52:39.002556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.498 [2024-10-01 13:52:39.005997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.498 [2024-10-01 13:52:39.006110] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.498 [2024-10-01 13:52:39.006141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.498 [2024-10-01 13:52:39.006159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.498 [2024-10-01 13:52:39.006191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.498 [2024-10-01 13:52:39.006222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.498 [2024-10-01 13:52:39.006239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.498 [2024-10-01 13:52:39.006253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.498 [2024-10-01 13:52:39.006284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.499 [2024-10-01 13:52:39.011933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.499 [2024-10-01 13:52:39.012047] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.499 [2024-10-01 13:52:39.012078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.499 [2024-10-01 13:52:39.012096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.499 [2024-10-01 13:52:39.012128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.499 [2024-10-01 13:52:39.012158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.499 [2024-10-01 13:52:39.012175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.499 [2024-10-01 13:52:39.012189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.499 [2024-10-01 13:52:39.012219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.499 [2024-10-01 13:52:39.016315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.499 [2024-10-01 13:52:39.016449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.499 [2024-10-01 13:52:39.016480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.499 [2024-10-01 13:52:39.016498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.499 [2024-10-01 13:52:39.017095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.499 [2024-10-01 13:52:39.017283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.499 [2024-10-01 13:52:39.017319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.499 [2024-10-01 13:52:39.017338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.499 [2024-10-01 13:52:39.017447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.499 [2024-10-01 13:52:39.022098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.499 [2024-10-01 13:52:39.022223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.499 [2024-10-01 13:52:39.022254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.499 [2024-10-01 13:52:39.022271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.499 [2024-10-01 13:52:39.022303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.499 [2024-10-01 13:52:39.022334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.499 [2024-10-01 13:52:39.022351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.499 [2024-10-01 13:52:39.022366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.499 [2024-10-01 13:52:39.022395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.499 [2024-10-01 13:52:39.026775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.499 [2024-10-01 13:52:39.026891] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.499 [2024-10-01 13:52:39.026937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.499 [2024-10-01 13:52:39.026956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.499 [2024-10-01 13:52:39.026989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.499 [2024-10-01 13:52:39.027019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.499 [2024-10-01 13:52:39.027036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.499 [2024-10-01 13:52:39.027050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.499 [2024-10-01 13:52:39.027081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.499 [2024-10-01 13:52:39.032192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.499 [2024-10-01 13:52:39.032306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.499 [2024-10-01 13:52:39.032337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.499 [2024-10-01 13:52:39.032355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.499 [2024-10-01 13:52:39.032403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.499 [2024-10-01 13:52:39.032456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.499 [2024-10-01 13:52:39.032475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.499 [2024-10-01 13:52:39.032489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.499 [2024-10-01 13:52:39.032520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.499 [2024-10-01 13:52:39.037028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.499 [2024-10-01 13:52:39.037143] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.499 [2024-10-01 13:52:39.037175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.499 [2024-10-01 13:52:39.037193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.499 [2024-10-01 13:52:39.037225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.499 [2024-10-01 13:52:39.037256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.499 [2024-10-01 13:52:39.037273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.499 [2024-10-01 13:52:39.037288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.499 [2024-10-01 13:52:39.037319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.499 [2024-10-01 13:52:39.042285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.499 [2024-10-01 13:52:39.042398] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.499 [2024-10-01 13:52:39.042429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.499 [2024-10-01 13:52:39.042447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.499 [2024-10-01 13:52:39.042479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.499 [2024-10-01 13:52:39.042509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.499 [2024-10-01 13:52:39.042527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.499 [2024-10-01 13:52:39.042555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.499 [2024-10-01 13:52:39.042589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.499 [2024-10-01 13:52:39.047129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.499 [2024-10-01 13:52:39.047256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.499 [2024-10-01 13:52:39.047288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.499 [2024-10-01 13:52:39.047313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.499 [2024-10-01 13:52:39.047346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.499 [2024-10-01 13:52:39.047377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.499 [2024-10-01 13:52:39.047394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.499 [2024-10-01 13:52:39.047408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.499 [2024-10-01 13:52:39.047460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.499 [2024-10-01 13:52:39.052376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.499 [2024-10-01 13:52:39.052498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.499 [2024-10-01 13:52:39.052530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.499 [2024-10-01 13:52:39.052549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.499 [2024-10-01 13:52:39.053783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.499 [2024-10-01 13:52:39.054613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.499 [2024-10-01 13:52:39.054655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.499 [2024-10-01 13:52:39.054675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.499 [2024-10-01 13:52:39.055016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.499 [2024-10-01 13:52:39.057236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.499 [2024-10-01 13:52:39.057353] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.499 [2024-10-01 13:52:39.057385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.499 [2024-10-01 13:52:39.057404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.499 [2024-10-01 13:52:39.057437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.499 [2024-10-01 13:52:39.057468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.499 [2024-10-01 13:52:39.057485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.499 [2024-10-01 13:52:39.057500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.499 [2024-10-01 13:52:39.057531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.499 [2024-10-01 13:52:39.062475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.499 [2024-10-01 13:52:39.062618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.499 [2024-10-01 13:52:39.062650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.499 [2024-10-01 13:52:39.062669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.499 [2024-10-01 13:52:39.062703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.499 [2024-10-01 13:52:39.062735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.499 [2024-10-01 13:52:39.062753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.499 [2024-10-01 13:52:39.062768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.499 [2024-10-01 13:52:39.062800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.499 [2024-10-01 13:52:39.067328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.499 [2024-10-01 13:52:39.067443] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.499 [2024-10-01 13:52:39.067475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.500 [2024-10-01 13:52:39.067521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.500 [2024-10-01 13:52:39.067557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.500 [2024-10-01 13:52:39.067588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.500 [2024-10-01 13:52:39.067606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.500 [2024-10-01 13:52:39.067620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.500 [2024-10-01 13:52:39.067651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.500 [2024-10-01 13:52:39.072591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.500 [2024-10-01 13:52:39.072706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.500 [2024-10-01 13:52:39.072738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.500 [2024-10-01 13:52:39.072756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.500 [2024-10-01 13:52:39.072788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.500 [2024-10-01 13:52:39.072819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.500 [2024-10-01 13:52:39.072836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.500 [2024-10-01 13:52:39.072850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.500 [2024-10-01 13:52:39.073442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.500 [2024-10-01 13:52:39.077418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.500 [2024-10-01 13:52:39.077533] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.500 [2024-10-01 13:52:39.077565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.500 [2024-10-01 13:52:39.077583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.500 [2024-10-01 13:52:39.077616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.500 [2024-10-01 13:52:39.077647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.500 [2024-10-01 13:52:39.077664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.500 [2024-10-01 13:52:39.077678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.500 [2024-10-01 13:52:39.077709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.500 [2024-10-01 13:52:39.082688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.500 [2024-10-01 13:52:39.082803] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.500 [2024-10-01 13:52:39.082835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.500 [2024-10-01 13:52:39.082853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.500 [2024-10-01 13:52:39.084076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.500 [2024-10-01 13:52:39.084857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.500 [2024-10-01 13:52:39.084930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.500 [2024-10-01 13:52:39.084953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.500 [2024-10-01 13:52:39.085276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.500 [2024-10-01 13:52:39.087510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.500 [2024-10-01 13:52:39.087625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.500 [2024-10-01 13:52:39.087656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.500 [2024-10-01 13:52:39.087675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.500 [2024-10-01 13:52:39.087708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.500 [2024-10-01 13:52:39.087739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.500 [2024-10-01 13:52:39.087756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.500 [2024-10-01 13:52:39.087771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.500 [2024-10-01 13:52:39.087802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.500 [2024-10-01 13:52:39.092778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.500 [2024-10-01 13:52:39.092892] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.500 [2024-10-01 13:52:39.092941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.500 [2024-10-01 13:52:39.092961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.500 [2024-10-01 13:52:39.092994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.500 [2024-10-01 13:52:39.093025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.500 [2024-10-01 13:52:39.093042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.500 [2024-10-01 13:52:39.093057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.500 [2024-10-01 13:52:39.094277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.500 [2024-10-01 13:52:39.097600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.500 [2024-10-01 13:52:39.097713] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.500 [2024-10-01 13:52:39.097745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.500 [2024-10-01 13:52:39.097763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.500 [2024-10-01 13:52:39.097795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.500 [2024-10-01 13:52:39.097826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.500 [2024-10-01 13:52:39.097843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.500 [2024-10-01 13:52:39.097856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.500 [2024-10-01 13:52:39.099099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.500 [2024-10-01 13:52:39.103540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.500 [2024-10-01 13:52:39.103732] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.500 [2024-10-01 13:52:39.103765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.500 [2024-10-01 13:52:39.103783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.500 [2024-10-01 13:52:39.103824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.500 [2024-10-01 13:52:39.103859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.500 [2024-10-01 13:52:39.103876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.500 [2024-10-01 13:52:39.103891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.500 [2024-10-01 13:52:39.103938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.500 [2024-10-01 13:52:39.107687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.500 [2024-10-01 13:52:39.107801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.500 [2024-10-01 13:52:39.107832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.500 [2024-10-01 13:52:39.107850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.500 [2024-10-01 13:52:39.109076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.501 [2024-10-01 13:52:39.109324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.501 [2024-10-01 13:52:39.109354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.501 [2024-10-01 13:52:39.109369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.501 [2024-10-01 13:52:39.110268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.501 [2024-10-01 13:52:39.114930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.501 [2024-10-01 13:52:39.115087] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.501 [2024-10-01 13:52:39.115120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.501 [2024-10-01 13:52:39.115138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.501 [2024-10-01 13:52:39.115172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.501 [2024-10-01 13:52:39.115204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.501 [2024-10-01 13:52:39.115221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.501 [2024-10-01 13:52:39.115236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.501 [2024-10-01 13:52:39.115266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.501 [2024-10-01 13:52:39.118575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.501 [2024-10-01 13:52:39.118705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.501 [2024-10-01 13:52:39.118737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.501 [2024-10-01 13:52:39.118755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.501 [2024-10-01 13:52:39.118813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.501 [2024-10-01 13:52:39.118846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.501 [2024-10-01 13:52:39.118864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.501 [2024-10-01 13:52:39.118878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.501 [2024-10-01 13:52:39.118923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.501 [2024-10-01 13:52:39.126004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.501 [2024-10-01 13:52:39.126149] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.501 [2024-10-01 13:52:39.126182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.501 [2024-10-01 13:52:39.126200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.501 [2024-10-01 13:52:39.126236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.501 [2024-10-01 13:52:39.126268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.501 [2024-10-01 13:52:39.126285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.501 [2024-10-01 13:52:39.126301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.501 [2024-10-01 13:52:39.126333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.501 [2024-10-01 13:52:39.129789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.501 [2024-10-01 13:52:39.130158] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.501 [2024-10-01 13:52:39.130204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.501 [2024-10-01 13:52:39.130225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.501 [2024-10-01 13:52:39.130298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.501 [2024-10-01 13:52:39.130338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.501 [2024-10-01 13:52:39.130356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.501 [2024-10-01 13:52:39.130371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.501 [2024-10-01 13:52:39.130423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.501 [2024-10-01 13:52:39.136415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.501 [2024-10-01 13:52:39.136561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.501 [2024-10-01 13:52:39.136593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.501 [2024-10-01 13:52:39.136611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.501 [2024-10-01 13:52:39.137220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.501 [2024-10-01 13:52:39.137411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.501 [2024-10-01 13:52:39.137440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.501 [2024-10-01 13:52:39.137488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.501 [2024-10-01 13:52:39.137602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.501 [2024-10-01 13:52:39.141123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.501 [2024-10-01 13:52:39.141241] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.501 [2024-10-01 13:52:39.141273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.501 [2024-10-01 13:52:39.141290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.501 [2024-10-01 13:52:39.141323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.501 [2024-10-01 13:52:39.141354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.501 [2024-10-01 13:52:39.141372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.501 [2024-10-01 13:52:39.141386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.501 [2024-10-01 13:52:39.141417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.501 [2024-10-01 13:52:39.146952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.501 [2024-10-01 13:52:39.147068] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.501 [2024-10-01 13:52:39.147099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.501 [2024-10-01 13:52:39.147118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.501 [2024-10-01 13:52:39.147150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.501 [2024-10-01 13:52:39.147183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.501 [2024-10-01 13:52:39.147201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.502 [2024-10-01 13:52:39.147223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.502 [2024-10-01 13:52:39.147253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.502 [2024-10-01 13:52:39.151407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.502 [2024-10-01 13:52:39.151522] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.502 [2024-10-01 13:52:39.151553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.502 [2024-10-01 13:52:39.151571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.502 [2024-10-01 13:52:39.152168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.502 [2024-10-01 13:52:39.152368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.502 [2024-10-01 13:52:39.152410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.502 [2024-10-01 13:52:39.152429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.502 [2024-10-01 13:52:39.152538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.502 [2024-10-01 13:52:39.157213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.502 [2024-10-01 13:52:39.157354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.502 [2024-10-01 13:52:39.157386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.502 [2024-10-01 13:52:39.157404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.502 [2024-10-01 13:52:39.157436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.502 [2024-10-01 13:52:39.157467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.502 [2024-10-01 13:52:39.157484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.502 [2024-10-01 13:52:39.157498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.502 [2024-10-01 13:52:39.157528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.502 [2024-10-01 13:52:39.161961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.502 [2024-10-01 13:52:39.162109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.502 [2024-10-01 13:52:39.162141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.502 [2024-10-01 13:52:39.162159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.502 [2024-10-01 13:52:39.162191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.502 [2024-10-01 13:52:39.162233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.502 [2024-10-01 13:52:39.162253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.502 [2024-10-01 13:52:39.162268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.502 [2024-10-01 13:52:39.162299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.502 [2024-10-01 13:52:39.167340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.502 [2024-10-01 13:52:39.167456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.502 [2024-10-01 13:52:39.167487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.502 [2024-10-01 13:52:39.167505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.502 [2024-10-01 13:52:39.167537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.502 [2024-10-01 13:52:39.167569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.502 [2024-10-01 13:52:39.167586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.502 [2024-10-01 13:52:39.167600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.502 [2024-10-01 13:52:39.167631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.502 [2024-10-01 13:52:39.172170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.502 [2024-10-01 13:52:39.172287] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.502 [2024-10-01 13:52:39.172319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.502 [2024-10-01 13:52:39.172337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.502 [2024-10-01 13:52:39.172381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.502 [2024-10-01 13:52:39.172437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.502 [2024-10-01 13:52:39.172456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.502 [2024-10-01 13:52:39.172470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.502 [2024-10-01 13:52:39.172501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.502 [2024-10-01 13:52:39.177439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.502 [2024-10-01 13:52:39.177554] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.502 [2024-10-01 13:52:39.177585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.502 [2024-10-01 13:52:39.177602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.502 [2024-10-01 13:52:39.177635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.502 [2024-10-01 13:52:39.177666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.502 [2024-10-01 13:52:39.177683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.502 [2024-10-01 13:52:39.177697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.502 [2024-10-01 13:52:39.177728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.502 [2024-10-01 13:52:39.182275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.502 [2024-10-01 13:52:39.182389] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.502 [2024-10-01 13:52:39.182421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.502 [2024-10-01 13:52:39.182439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.502 [2024-10-01 13:52:39.182484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.502 [2024-10-01 13:52:39.182519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.502 [2024-10-01 13:52:39.182549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.502 [2024-10-01 13:52:39.182567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.502 [2024-10-01 13:52:39.182599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.502 [2024-10-01 13:52:39.187533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.502 [2024-10-01 13:52:39.187647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.502 [2024-10-01 13:52:39.187679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.502 [2024-10-01 13:52:39.187697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.502 [2024-10-01 13:52:39.188933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.502 [2024-10-01 13:52:39.189705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.502 [2024-10-01 13:52:39.189745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.502 [2024-10-01 13:52:39.189765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.502 [2024-10-01 13:52:39.190127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.502 [2024-10-01 13:52:39.192366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.502 [2024-10-01 13:52:39.192482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.502 [2024-10-01 13:52:39.192513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.502 [2024-10-01 13:52:39.192531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.502 [2024-10-01 13:52:39.192563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.502 [2024-10-01 13:52:39.192594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.502 [2024-10-01 13:52:39.192611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.502 [2024-10-01 13:52:39.192625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.502 [2024-10-01 13:52:39.192655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.502 [2024-10-01 13:52:39.197623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.502 [2024-10-01 13:52:39.197738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.502 [2024-10-01 13:52:39.197770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.502 [2024-10-01 13:52:39.197787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.502 [2024-10-01 13:52:39.199030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.502 [2024-10-01 13:52:39.199284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.502 [2024-10-01 13:52:39.199323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.502 [2024-10-01 13:52:39.199341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.502 [2024-10-01 13:52:39.200247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.502 [2024-10-01 13:52:39.202459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.502 [2024-10-01 13:52:39.202586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.503 [2024-10-01 13:52:39.202618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.503 [2024-10-01 13:52:39.202636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.503 [2024-10-01 13:52:39.203857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.503 [2024-10-01 13:52:39.204661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.503 [2024-10-01 13:52:39.204701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.503 [2024-10-01 13:52:39.204721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.503 [2024-10-01 13:52:39.205055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.503 [2024-10-01 13:52:39.208404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.503 [2024-10-01 13:52:39.208528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.503 [2024-10-01 13:52:39.208559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.503 [2024-10-01 13:52:39.208599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.503 [2024-10-01 13:52:39.208633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.503 [2024-10-01 13:52:39.208664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.503 [2024-10-01 13:52:39.208682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.503 [2024-10-01 13:52:39.208696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.503 [2024-10-01 13:52:39.208727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.503 [2024-10-01 13:52:39.212545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.503 [2024-10-01 13:52:39.212659] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.503 [2024-10-01 13:52:39.212690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.503 [2024-10-01 13:52:39.212707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.503 [2024-10-01 13:52:39.213933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.503 [2024-10-01 13:52:39.214166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.503 [2024-10-01 13:52:39.214203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.503 [2024-10-01 13:52:39.214221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.503 [2024-10-01 13:52:39.215147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.503 [2024-10-01 13:52:39.219569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.503 [2024-10-01 13:52:39.219924] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.503 [2024-10-01 13:52:39.219968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.503 [2024-10-01 13:52:39.219988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.503 [2024-10-01 13:52:39.220060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.503 [2024-10-01 13:52:39.220099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.503 [2024-10-01 13:52:39.220117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.503 [2024-10-01 13:52:39.220131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.503 [2024-10-01 13:52:39.220161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.503 [2024-10-01 13:52:39.222638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.503 [2024-10-01 13:52:39.223319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.503 [2024-10-01 13:52:39.223366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.503 [2024-10-01 13:52:39.223387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.503 [2024-10-01 13:52:39.223553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.503 [2024-10-01 13:52:39.223667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.503 [2024-10-01 13:52:39.223715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.503 [2024-10-01 13:52:39.223733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.503 [2024-10-01 13:52:39.223776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.503 [2024-10-01 13:52:39.230878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.503 [2024-10-01 13:52:39.231022] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.503 [2024-10-01 13:52:39.231054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.503 [2024-10-01 13:52:39.231073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.503 [2024-10-01 13:52:39.231107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.503 [2024-10-01 13:52:39.231139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.503 [2024-10-01 13:52:39.231156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.503 [2024-10-01 13:52:39.231170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.503 [2024-10-01 13:52:39.231202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.503 [2024-10-01 13:52:39.234707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.503 [2024-10-01 13:52:39.235073] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.503 [2024-10-01 13:52:39.235117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.503 [2024-10-01 13:52:39.235138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.503 [2024-10-01 13:52:39.235210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.503 [2024-10-01 13:52:39.235249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.503 [2024-10-01 13:52:39.235267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.503 [2024-10-01 13:52:39.235282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.503 [2024-10-01 13:52:39.235314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.503 [2024-10-01 13:52:39.241362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.503 [2024-10-01 13:52:39.241516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.503 [2024-10-01 13:52:39.241548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.503 [2024-10-01 13:52:39.241567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.503 [2024-10-01 13:52:39.242174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.503 [2024-10-01 13:52:39.242368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.503 [2024-10-01 13:52:39.242396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.503 [2024-10-01 13:52:39.242413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.503 [2024-10-01 13:52:39.242572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.503 [2024-10-01 13:52:39.246055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.503 [2024-10-01 13:52:39.246179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.503 [2024-10-01 13:52:39.246211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.504 [2024-10-01 13:52:39.246229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.504 [2024-10-01 13:52:39.246262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.504 [2024-10-01 13:52:39.246294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.504 [2024-10-01 13:52:39.246311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.504 [2024-10-01 13:52:39.246325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.504 [2024-10-01 13:52:39.246356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.504 [2024-10-01 13:52:39.252055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.504 [2024-10-01 13:52:39.252179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.504 [2024-10-01 13:52:39.252211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.504 [2024-10-01 13:52:39.252229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.504 [2024-10-01 13:52:39.252262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.504 [2024-10-01 13:52:39.252293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.504 [2024-10-01 13:52:39.252311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.504 [2024-10-01 13:52:39.252326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.504 [2024-10-01 13:52:39.252358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.504 [2024-10-01 13:52:39.256516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.504 [2024-10-01 13:52:39.256632] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.504 [2024-10-01 13:52:39.256663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.504 [2024-10-01 13:52:39.256682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.504 [2024-10-01 13:52:39.257288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.504 [2024-10-01 13:52:39.257476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.504 [2024-10-01 13:52:39.257503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.504 [2024-10-01 13:52:39.257518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.504 [2024-10-01 13:52:39.257627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.504 [2024-10-01 13:52:39.262323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.504 [2024-10-01 13:52:39.262439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.504 [2024-10-01 13:52:39.262471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.504 [2024-10-01 13:52:39.262516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.504 [2024-10-01 13:52:39.262578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.504 [2024-10-01 13:52:39.262614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.504 [2024-10-01 13:52:39.262632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.504 [2024-10-01 13:52:39.262646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.504 [2024-10-01 13:52:39.262676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.504 [2024-10-01 13:52:39.267043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.504 [2024-10-01 13:52:39.267156] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.504 [2024-10-01 13:52:39.267186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.504 [2024-10-01 13:52:39.267204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.504 [2024-10-01 13:52:39.267236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.504 [2024-10-01 13:52:39.267267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.504 [2024-10-01 13:52:39.267284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.504 [2024-10-01 13:52:39.267298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.504 [2024-10-01 13:52:39.267330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.504 [2024-10-01 13:52:39.272482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.504 [2024-10-01 13:52:39.272595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.504 [2024-10-01 13:52:39.272627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.504 [2024-10-01 13:52:39.272645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.504 [2024-10-01 13:52:39.272693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.504 [2024-10-01 13:52:39.272727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.504 [2024-10-01 13:52:39.272745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.504 [2024-10-01 13:52:39.272759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.504 [2024-10-01 13:52:39.272789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.504 [2024-10-01 13:52:39.277299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.504 [2024-10-01 13:52:39.277414] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.504 [2024-10-01 13:52:39.277445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.504 [2024-10-01 13:52:39.277463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.504 [2024-10-01 13:52:39.277495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.504 [2024-10-01 13:52:39.277526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.504 [2024-10-01 13:52:39.277543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.504 [2024-10-01 13:52:39.277574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.504 [2024-10-01 13:52:39.277607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.504 [2024-10-01 13:52:39.282591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.504 [2024-10-01 13:52:39.282703] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.504 [2024-10-01 13:52:39.282734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.504 [2024-10-01 13:52:39.282752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.504 [2024-10-01 13:52:39.282794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.504 [2024-10-01 13:52:39.282824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.504 [2024-10-01 13:52:39.282841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.504 [2024-10-01 13:52:39.282855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.504 [2024-10-01 13:52:39.282886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.504 [2024-10-01 13:52:39.287418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.504 [2024-10-01 13:52:39.287531] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.504 [2024-10-01 13:52:39.287563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.504 [2024-10-01 13:52:39.287581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.505 [2024-10-01 13:52:39.287614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.505 [2024-10-01 13:52:39.287645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.505 [2024-10-01 13:52:39.287662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.505 [2024-10-01 13:52:39.287676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.505 [2024-10-01 13:52:39.287706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.505 [2024-10-01 13:52:39.292678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.505 [2024-10-01 13:52:39.292791] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.505 [2024-10-01 13:52:39.292822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.505 [2024-10-01 13:52:39.292840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.505 [2024-10-01 13:52:39.294084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.505 [2024-10-01 13:52:39.294904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.505 [2024-10-01 13:52:39.294971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.505 [2024-10-01 13:52:39.294991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.505 [2024-10-01 13:52:39.295306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.505 [2024-10-01 13:52:39.297508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.505 [2024-10-01 13:52:39.297649] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.505 [2024-10-01 13:52:39.297682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.505 [2024-10-01 13:52:39.297700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.505 [2024-10-01 13:52:39.297732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.505 [2024-10-01 13:52:39.297764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.505 [2024-10-01 13:52:39.297780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.505 [2024-10-01 13:52:39.297794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.505 [2024-10-01 13:52:39.297825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.505 [2024-10-01 13:52:39.302776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.505 [2024-10-01 13:52:39.302890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.505 [2024-10-01 13:52:39.302939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.505 [2024-10-01 13:52:39.302959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.505 [2024-10-01 13:52:39.302992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.505 [2024-10-01 13:52:39.303023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.505 [2024-10-01 13:52:39.303040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.505 [2024-10-01 13:52:39.303054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.505 [2024-10-01 13:52:39.304275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.505 [2024-10-01 13:52:39.307617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.505 [2024-10-01 13:52:39.307731] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.505 [2024-10-01 13:52:39.307763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.505 [2024-10-01 13:52:39.307781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.505 [2024-10-01 13:52:39.307813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.505 [2024-10-01 13:52:39.309057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.505 [2024-10-01 13:52:39.309095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.505 [2024-10-01 13:52:39.309114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.505 [2024-10-01 13:52:39.309857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.505 [2024-10-01 13:52:39.312869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.505 [2024-10-01 13:52:39.312994] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.505 [2024-10-01 13:52:39.313026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.505 [2024-10-01 13:52:39.313043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.505 [2024-10-01 13:52:39.313637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.505 [2024-10-01 13:52:39.313827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.505 [2024-10-01 13:52:39.313864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.505 [2024-10-01 13:52:39.313882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.505 [2024-10-01 13:52:39.314004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.505 [2024-10-01 13:52:39.317707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.505 [2024-10-01 13:52:39.317821] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.505 [2024-10-01 13:52:39.317852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.505 [2024-10-01 13:52:39.317870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.505 [2024-10-01 13:52:39.317902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.505 [2024-10-01 13:52:39.317951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.505 [2024-10-01 13:52:39.317970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.505 [2024-10-01 13:52:39.317984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.505 [2024-10-01 13:52:39.318015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.505 [2024-10-01 13:52:39.324866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.505 [2024-10-01 13:52:39.325281] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.505 [2024-10-01 13:52:39.325327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.505 [2024-10-01 13:52:39.325347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.505 [2024-10-01 13:52:39.325417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.505 [2024-10-01 13:52:39.325456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.505 [2024-10-01 13:52:39.325474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.505 [2024-10-01 13:52:39.325488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.505 [2024-10-01 13:52:39.325519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.505 [2024-10-01 13:52:39.327798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.505 [2024-10-01 13:52:39.327923] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.505 [2024-10-01 13:52:39.327955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.505 [2024-10-01 13:52:39.327973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.505 [2024-10-01 13:52:39.328006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.505 [2024-10-01 13:52:39.328574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.505 [2024-10-01 13:52:39.328612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.505 [2024-10-01 13:52:39.328631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.505 [2024-10-01 13:52:39.328832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.505 [2024-10-01 13:52:39.336112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.505 [2024-10-01 13:52:39.336230] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.505 [2024-10-01 13:52:39.336261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.505 [2024-10-01 13:52:39.336279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.506 [2024-10-01 13:52:39.336311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.506 [2024-10-01 13:52:39.336343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.506 [2024-10-01 13:52:39.336360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.506 [2024-10-01 13:52:39.336374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.506 [2024-10-01 13:52:39.336405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.506 [2024-10-01 13:52:39.339851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.506 [2024-10-01 13:52:39.340201] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.506 [2024-10-01 13:52:39.340257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.506 [2024-10-01 13:52:39.340278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.506 [2024-10-01 13:52:39.340348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.506 [2024-10-01 13:52:39.340387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.506 [2024-10-01 13:52:39.340405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.506 [2024-10-01 13:52:39.340419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.506 [2024-10-01 13:52:39.340450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.506 [2024-10-01 13:52:39.346385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.506 [2024-10-01 13:52:39.346499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.506 [2024-10-01 13:52:39.346530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.506 [2024-10-01 13:52:39.346569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.506 [2024-10-01 13:52:39.347159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.506 [2024-10-01 13:52:39.347346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.506 [2024-10-01 13:52:39.347374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.506 [2024-10-01 13:52:39.347389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.506 [2024-10-01 13:52:39.347496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.506 [2024-10-01 13:52:39.351037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.506 [2024-10-01 13:52:39.351150] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.506 [2024-10-01 13:52:39.351207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.506 [2024-10-01 13:52:39.351227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.506 [2024-10-01 13:52:39.351261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.506 [2024-10-01 13:52:39.351292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.506 [2024-10-01 13:52:39.351309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.506 [2024-10-01 13:52:39.351324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.506 [2024-10-01 13:52:39.351355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.506 [2024-10-01 13:52:39.356957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.506 [2024-10-01 13:52:39.357070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.506 [2024-10-01 13:52:39.357102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.506 [2024-10-01 13:52:39.357119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.506 [2024-10-01 13:52:39.357152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.506 [2024-10-01 13:52:39.357190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.506 [2024-10-01 13:52:39.357208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.506 [2024-10-01 13:52:39.357222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.506 [2024-10-01 13:52:39.357252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.506 [2024-10-01 13:52:39.361407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.506 [2024-10-01 13:52:39.361522] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.506 [2024-10-01 13:52:39.361553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.506 [2024-10-01 13:52:39.361571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.506 [2024-10-01 13:52:39.362166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.506 [2024-10-01 13:52:39.362352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.506 [2024-10-01 13:52:39.362388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.506 [2024-10-01 13:52:39.362407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.506 [2024-10-01 13:52:39.362526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.506 [2024-10-01 13:52:39.367256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.506 [2024-10-01 13:52:39.367432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.506 [2024-10-01 13:52:39.367465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.506 [2024-10-01 13:52:39.367483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.506 [2024-10-01 13:52:39.367524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.506 [2024-10-01 13:52:39.367575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.506 [2024-10-01 13:52:39.367595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.506 [2024-10-01 13:52:39.367610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.506 [2024-10-01 13:52:39.367640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.506 [2024-10-01 13:52:39.371962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.506 [2024-10-01 13:52:39.372075] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.506 [2024-10-01 13:52:39.372107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.506 [2024-10-01 13:52:39.372125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.506 [2024-10-01 13:52:39.372157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.506 [2024-10-01 13:52:39.372188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.506 [2024-10-01 13:52:39.372205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.506 [2024-10-01 13:52:39.372219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.506 [2024-10-01 13:52:39.372249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.506 [2024-10-01 13:52:39.377348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.506 [2024-10-01 13:52:39.377469] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.506 [2024-10-01 13:52:39.377500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.506 [2024-10-01 13:52:39.377518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.506 [2024-10-01 13:52:39.377550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.506 [2024-10-01 13:52:39.377580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.506 [2024-10-01 13:52:39.377597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.506 [2024-10-01 13:52:39.377611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.506 [2024-10-01 13:52:39.377640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.506 [2024-10-01 13:52:39.382138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.506 [2024-10-01 13:52:39.382253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.507 [2024-10-01 13:52:39.382284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.507 [2024-10-01 13:52:39.382302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.507 [2024-10-01 13:52:39.382333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.507 [2024-10-01 13:52:39.382364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.507 [2024-10-01 13:52:39.382381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.507 [2024-10-01 13:52:39.382395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.507 [2024-10-01 13:52:39.382426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.507 [2024-10-01 13:52:39.387451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.507 [2024-10-01 13:52:39.387563] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.507 [2024-10-01 13:52:39.387595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.507 [2024-10-01 13:52:39.387613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.507 [2024-10-01 13:52:39.387645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.507 [2024-10-01 13:52:39.387676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.507 [2024-10-01 13:52:39.387693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.507 [2024-10-01 13:52:39.387707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.507 [2024-10-01 13:52:39.387737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.507 [2024-10-01 13:52:39.392251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.507 [2024-10-01 13:52:39.392366] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.507 [2024-10-01 13:52:39.392398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.507 [2024-10-01 13:52:39.392416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.507 [2024-10-01 13:52:39.392448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.507 [2024-10-01 13:52:39.392496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.507 [2024-10-01 13:52:39.392517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.507 [2024-10-01 13:52:39.392532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.507 [2024-10-01 13:52:39.392563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.507 [2024-10-01 13:52:39.397545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.507 [2024-10-01 13:52:39.398859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.507 [2024-10-01 13:52:39.398906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.507 [2024-10-01 13:52:39.398941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.507 [2024-10-01 13:52:39.399682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.507 [2024-10-01 13:52:39.400037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.507 [2024-10-01 13:52:39.400076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.507 [2024-10-01 13:52:39.400095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.507 [2024-10-01 13:52:39.400167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.507 [2024-10-01 13:52:39.402346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.507 [2024-10-01 13:52:39.402455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.507 [2024-10-01 13:52:39.402486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.507 [2024-10-01 13:52:39.402524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.507 [2024-10-01 13:52:39.402573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.507 [2024-10-01 13:52:39.402618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.507 [2024-10-01 13:52:39.402637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.507 [2024-10-01 13:52:39.402651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.507 [2024-10-01 13:52:39.402681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.507 [2024-10-01 13:52:39.407640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.507 [2024-10-01 13:52:39.407756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.507 [2024-10-01 13:52:39.407787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.507 [2024-10-01 13:52:39.407805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.507 [2024-10-01 13:52:39.409035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.507 [2024-10-01 13:52:39.409291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.507 [2024-10-01 13:52:39.409330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.507 [2024-10-01 13:52:39.409349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.507 [2024-10-01 13:52:39.410252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.507 [2024-10-01 13:52:39.412434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.507 [2024-10-01 13:52:39.412546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.507 [2024-10-01 13:52:39.412577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.507 [2024-10-01 13:52:39.412594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.507 [2024-10-01 13:52:39.413810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.507 [2024-10-01 13:52:39.414637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.507 [2024-10-01 13:52:39.414678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.507 [2024-10-01 13:52:39.414697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.507 [2024-10-01 13:52:39.415035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.507 [2024-10-01 13:52:39.418480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.507 [2024-10-01 13:52:39.418623] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.507 [2024-10-01 13:52:39.418655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.507 [2024-10-01 13:52:39.418673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.508 [2024-10-01 13:52:39.418706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.508 [2024-10-01 13:52:39.418737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.508 [2024-10-01 13:52:39.418754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.508 [2024-10-01 13:52:39.418791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.508 [2024-10-01 13:52:39.418823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.508 [2024-10-01 13:52:39.422522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.508 [2024-10-01 13:52:39.422653] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.508 [2024-10-01 13:52:39.422684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.508 [2024-10-01 13:52:39.422702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.508 [2024-10-01 13:52:39.422734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.508 [2024-10-01 13:52:39.422765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.508 [2024-10-01 13:52:39.422781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.508 [2024-10-01 13:52:39.422795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.508 [2024-10-01 13:52:39.424029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.508 [2024-10-01 13:52:39.429623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.508 [2024-10-01 13:52:39.429981] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.508 [2024-10-01 13:52:39.430024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.508 [2024-10-01 13:52:39.430045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.508 [2024-10-01 13:52:39.430115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.508 [2024-10-01 13:52:39.430153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.508 [2024-10-01 13:52:39.430171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.508 [2024-10-01 13:52:39.430185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.508 [2024-10-01 13:52:39.430217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.508 [2024-10-01 13:52:39.432624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.508 [2024-10-01 13:52:39.433296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.508 [2024-10-01 13:52:39.433341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.508 [2024-10-01 13:52:39.433362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.508 [2024-10-01 13:52:39.433521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.508 [2024-10-01 13:52:39.433636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.508 [2024-10-01 13:52:39.433665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.508 [2024-10-01 13:52:39.433682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.508 [2024-10-01 13:52:39.433723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.508 [2024-10-01 13:52:39.440847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.508 [2024-10-01 13:52:39.441002] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.508 [2024-10-01 13:52:39.441035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.508 [2024-10-01 13:52:39.441053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.508 [2024-10-01 13:52:39.441086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.508 [2024-10-01 13:52:39.441117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.508 [2024-10-01 13:52:39.441134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.508 [2024-10-01 13:52:39.441148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.508 [2024-10-01 13:52:39.441178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.508 [2024-10-01 13:52:39.444822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.508 [2024-10-01 13:52:39.444990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.508 [2024-10-01 13:52:39.445023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.508 [2024-10-01 13:52:39.445041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.508 [2024-10-01 13:52:39.445075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.508 [2024-10-01 13:52:39.445106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.508 [2024-10-01 13:52:39.445123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.508 [2024-10-01 13:52:39.445137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.508 [2024-10-01 13:52:39.445168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.508 [2024-10-01 13:52:39.451141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.508 [2024-10-01 13:52:39.451255] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.508 [2024-10-01 13:52:39.451298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.508 [2024-10-01 13:52:39.451318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.508 [2024-10-01 13:52:39.451893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.508 [2024-10-01 13:52:39.452096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.508 [2024-10-01 13:52:39.452134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.508 [2024-10-01 13:52:39.452153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.508 [2024-10-01 13:52:39.452261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.508 [2024-10-01 13:52:39.455731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.508 [2024-10-01 13:52:39.455846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.508 [2024-10-01 13:52:39.455877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.508 [2024-10-01 13:52:39.455895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.508 [2024-10-01 13:52:39.455964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.508 [2024-10-01 13:52:39.455998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.508 [2024-10-01 13:52:39.456015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.508 [2024-10-01 13:52:39.456029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.508 [2024-10-01 13:52:39.456060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.508 [2024-10-01 13:52:39.461658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.508 [2024-10-01 13:52:39.461773] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.508 [2024-10-01 13:52:39.461805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.508 [2024-10-01 13:52:39.461822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.508 [2024-10-01 13:52:39.461855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.508 [2024-10-01 13:52:39.461886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.508 [2024-10-01 13:52:39.461903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.508 [2024-10-01 13:52:39.461936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.508 [2024-10-01 13:52:39.461969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.508 [2024-10-01 13:52:39.466094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.509 [2024-10-01 13:52:39.466209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.509 [2024-10-01 13:52:39.466240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.509 [2024-10-01 13:52:39.466258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.509 [2024-10-01 13:52:39.466862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.509 [2024-10-01 13:52:39.467065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.509 [2024-10-01 13:52:39.467102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.509 [2024-10-01 13:52:39.467120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.509 [2024-10-01 13:52:39.467230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.509 [2024-10-01 13:52:39.471940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.509 [2024-10-01 13:52:39.472054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.509 [2024-10-01 13:52:39.472085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.509 [2024-10-01 13:52:39.472103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.509 [2024-10-01 13:52:39.472136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.509 [2024-10-01 13:52:39.472167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.509 [2024-10-01 13:52:39.472185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.509 [2024-10-01 13:52:39.472222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.509 [2024-10-01 13:52:39.472257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.509 [2024-10-01 13:52:39.476694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.509 [2024-10-01 13:52:39.476809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.509 [2024-10-01 13:52:39.476840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.509 [2024-10-01 13:52:39.476858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.509 [2024-10-01 13:52:39.476890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.509 [2024-10-01 13:52:39.476937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.509 [2024-10-01 13:52:39.476958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.509 [2024-10-01 13:52:39.476973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.509 [2024-10-01 13:52:39.477003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.509 [2024-10-01 13:52:39.482146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.509 [2024-10-01 13:52:39.482262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.509 [2024-10-01 13:52:39.482293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.509 [2024-10-01 13:52:39.482311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.509 [2024-10-01 13:52:39.482357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.509 [2024-10-01 13:52:39.482392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.509 [2024-10-01 13:52:39.482410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.509 [2024-10-01 13:52:39.482424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.509 [2024-10-01 13:52:39.482454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.509 [2024-10-01 13:52:39.487017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.509 [2024-10-01 13:52:39.487137] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.509 [2024-10-01 13:52:39.487180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.509 [2024-10-01 13:52:39.487198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.509 [2024-10-01 13:52:39.487230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.509 [2024-10-01 13:52:39.487261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.509 [2024-10-01 13:52:39.487278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.509 [2024-10-01 13:52:39.487293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.509 [2024-10-01 13:52:39.487324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.509 [2024-10-01 13:52:39.492233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.509 [2024-10-01 13:52:39.492354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.509 [2024-10-01 13:52:39.492403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.509 [2024-10-01 13:52:39.492422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.509 [2024-10-01 13:52:39.492455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.509 [2024-10-01 13:52:39.492487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.509 [2024-10-01 13:52:39.492504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.509 [2024-10-01 13:52:39.492519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.509 [2024-10-01 13:52:39.492549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.509 [2024-10-01 13:52:39.497222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.509 [2024-10-01 13:52:39.497339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.509 [2024-10-01 13:52:39.497370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.509 [2024-10-01 13:52:39.497396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.509 [2024-10-01 13:52:39.497428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.509 [2024-10-01 13:52:39.497459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.509 [2024-10-01 13:52:39.497475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.509 [2024-10-01 13:52:39.497490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.509 [2024-10-01 13:52:39.497521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.509 [2024-10-01 13:52:39.502334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.509 [2024-10-01 13:52:39.502479] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.509 [2024-10-01 13:52:39.502512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.509 [2024-10-01 13:52:39.502530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.509 [2024-10-01 13:52:39.502581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.509 [2024-10-01 13:52:39.502613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.509 [2024-10-01 13:52:39.502630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.509 [2024-10-01 13:52:39.502644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.509 [2024-10-01 13:52:39.503864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.509 [2024-10-01 13:52:39.507312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.509 [2024-10-01 13:52:39.507424] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.509 [2024-10-01 13:52:39.507455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.509 [2024-10-01 13:52:39.507473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.509 [2024-10-01 13:52:39.507505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.509 [2024-10-01 13:52:39.507555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.509 [2024-10-01 13:52:39.507574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.509 [2024-10-01 13:52:39.507588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.509 [2024-10-01 13:52:39.507619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.510 [2024-10-01 13:52:39.512452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.510 [2024-10-01 13:52:39.512564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.510 [2024-10-01 13:52:39.512595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.510 [2024-10-01 13:52:39.512614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.510 [2024-10-01 13:52:39.512646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.510 [2024-10-01 13:52:39.512678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.510 [2024-10-01 13:52:39.512695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.510 [2024-10-01 13:52:39.512709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.510 [2024-10-01 13:52:39.513945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.510 [2024-10-01 13:52:39.517402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.510 [2024-10-01 13:52:39.518711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.510 [2024-10-01 13:52:39.518756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.510 [2024-10-01 13:52:39.518777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.510 [2024-10-01 13:52:39.519533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.510 [2024-10-01 13:52:39.519873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.510 [2024-10-01 13:52:39.519927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.510 [2024-10-01 13:52:39.519949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.510 [2024-10-01 13:52:39.520022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.510 [2024-10-01 13:52:39.523308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.510 [2024-10-01 13:52:39.523442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.510 [2024-10-01 13:52:39.523473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.510 [2024-10-01 13:52:39.523491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.510 [2024-10-01 13:52:39.523523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.510 [2024-10-01 13:52:39.523554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.510 [2024-10-01 13:52:39.523572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.510 [2024-10-01 13:52:39.523588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.510 [2024-10-01 13:52:39.523618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.510 [2024-10-01 13:52:39.527490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.510 [2024-10-01 13:52:39.527606] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.510 [2024-10-01 13:52:39.527638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.510 [2024-10-01 13:52:39.527656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.510 [2024-10-01 13:52:39.528893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.510 [2024-10-01 13:52:39.529166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.510 [2024-10-01 13:52:39.529197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.510 [2024-10-01 13:52:39.529212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.510 [2024-10-01 13:52:39.530133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.510 [2024-10-01 13:52:39.534499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.510 [2024-10-01 13:52:39.534844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.510 [2024-10-01 13:52:39.534888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.510 [2024-10-01 13:52:39.534922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.510 [2024-10-01 13:52:39.534998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.510 [2024-10-01 13:52:39.535037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.510 [2024-10-01 13:52:39.535055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.510 [2024-10-01 13:52:39.535079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.510 [2024-10-01 13:52:39.535110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.510 [2024-10-01 13:52:39.538306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.510 [2024-10-01 13:52:39.538425] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.510 [2024-10-01 13:52:39.538456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.510 [2024-10-01 13:52:39.538474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.510 [2024-10-01 13:52:39.538507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.510 [2024-10-01 13:52:39.538552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.510 [2024-10-01 13:52:39.538572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.510 [2024-10-01 13:52:39.538586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.510 [2024-10-01 13:52:39.538617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.510 [2024-10-01 13:52:39.545570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.510 [2024-10-01 13:52:39.545685] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.510 [2024-10-01 13:52:39.545716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.510 [2024-10-01 13:52:39.545761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.510 [2024-10-01 13:52:39.545798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.510 [2024-10-01 13:52:39.545829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.510 [2024-10-01 13:52:39.545846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.510 [2024-10-01 13:52:39.545860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.510 [2024-10-01 13:52:39.545891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.510 [2024-10-01 13:52:39.549567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.510 [2024-10-01 13:52:39.549721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.510 [2024-10-01 13:52:39.549753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.510 [2024-10-01 13:52:39.549771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.510 [2024-10-01 13:52:39.549805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.510 [2024-10-01 13:52:39.549836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.510 [2024-10-01 13:52:39.549853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.510 [2024-10-01 13:52:39.549868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.510 [2024-10-01 13:52:39.549899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.510 [2024-10-01 13:52:39.555841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.510 [2024-10-01 13:52:39.555973] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.510 [2024-10-01 13:52:39.556005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.510 [2024-10-01 13:52:39.556030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.511 [2024-10-01 13:52:39.556606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.511 [2024-10-01 13:52:39.556791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.511 [2024-10-01 13:52:39.556819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.511 [2024-10-01 13:52:39.556834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.511 [2024-10-01 13:52:39.556959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.511 [2024-10-01 13:52:39.560450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.511 [2024-10-01 13:52:39.560566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.511 [2024-10-01 13:52:39.560597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.511 [2024-10-01 13:52:39.560615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.511 [2024-10-01 13:52:39.560647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.511 [2024-10-01 13:52:39.560678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.511 [2024-10-01 13:52:39.560718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.511 [2024-10-01 13:52:39.560734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.511 [2024-10-01 13:52:39.560766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.511 [2024-10-01 13:52:39.566296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.511 [2024-10-01 13:52:39.566411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.511 [2024-10-01 13:52:39.566443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.511 [2024-10-01 13:52:39.566461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.511 [2024-10-01 13:52:39.566494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.511 [2024-10-01 13:52:39.566525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.511 [2024-10-01 13:52:39.566555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.511 [2024-10-01 13:52:39.566571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.511 [2024-10-01 13:52:39.566602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.511 [2024-10-01 13:52:39.570755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.511 [2024-10-01 13:52:39.570870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.511 [2024-10-01 13:52:39.570900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.511 [2024-10-01 13:52:39.570934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.511 [2024-10-01 13:52:39.571515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.511 [2024-10-01 13:52:39.571708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.511 [2024-10-01 13:52:39.571736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.511 [2024-10-01 13:52:39.571752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.511 [2024-10-01 13:52:39.571862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.511 [2024-10-01 13:52:39.576501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.511 [2024-10-01 13:52:39.576616] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.511 [2024-10-01 13:52:39.576648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.511 [2024-10-01 13:52:39.576666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.511 [2024-10-01 13:52:39.576699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.511 [2024-10-01 13:52:39.576730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.511 [2024-10-01 13:52:39.576747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.511 [2024-10-01 13:52:39.576761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.511 [2024-10-01 13:52:39.576791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.511 [2024-10-01 13:52:39.581204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.511 [2024-10-01 13:52:39.581351] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.511 [2024-10-01 13:52:39.581384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.511 [2024-10-01 13:52:39.581402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.511 [2024-10-01 13:52:39.581435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.511 [2024-10-01 13:52:39.581466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.511 [2024-10-01 13:52:39.581483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.511 [2024-10-01 13:52:39.581497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.511 [2024-10-01 13:52:39.581528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.511 [2024-10-01 13:52:39.586606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.511 [2024-10-01 13:52:39.586727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.511 [2024-10-01 13:52:39.586759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.511 [2024-10-01 13:52:39.586777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.511 [2024-10-01 13:52:39.586810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.511 [2024-10-01 13:52:39.586840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.511 [2024-10-01 13:52:39.586858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.511 [2024-10-01 13:52:39.586872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.511 [2024-10-01 13:52:39.586903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.511 [2024-10-01 13:52:39.591467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.511 [2024-10-01 13:52:39.591594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.511 [2024-10-01 13:52:39.591625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.511 [2024-10-01 13:52:39.591644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.511 [2024-10-01 13:52:39.591675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.511 [2024-10-01 13:52:39.591706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.511 [2024-10-01 13:52:39.591723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.511 [2024-10-01 13:52:39.591738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.511 [2024-10-01 13:52:39.591768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.511 [2024-10-01 13:52:39.596702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.511 [2024-10-01 13:52:39.596829] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.511 [2024-10-01 13:52:39.596860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.511 [2024-10-01 13:52:39.596877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.511 [2024-10-01 13:52:39.596952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.511 [2024-10-01 13:52:39.596988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.511 [2024-10-01 13:52:39.597005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.511 [2024-10-01 13:52:39.597019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.511 [2024-10-01 13:52:39.597063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.511 [2024-10-01 13:52:39.601697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.511 [2024-10-01 13:52:39.601814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.512 [2024-10-01 13:52:39.601851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.512 [2024-10-01 13:52:39.601869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.512 [2024-10-01 13:52:39.601935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.512 [2024-10-01 13:52:39.601973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.512 [2024-10-01 13:52:39.601991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.512 [2024-10-01 13:52:39.602005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.512 [2024-10-01 13:52:39.602037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.512 [2024-10-01 13:52:39.606800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.512 [2024-10-01 13:52:39.606938] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.512 [2024-10-01 13:52:39.606977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.512 [2024-10-01 13:52:39.606995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.512 [2024-10-01 13:52:39.607029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.512 [2024-10-01 13:52:39.607074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.512 [2024-10-01 13:52:39.607093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.512 [2024-10-01 13:52:39.607109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.512 [2024-10-01 13:52:39.607140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.512 [2024-10-01 13:52:39.611793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.512 [2024-10-01 13:52:39.611947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.512 [2024-10-01 13:52:39.611982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.512 [2024-10-01 13:52:39.612001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.512 [2024-10-01 13:52:39.612036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.512 [2024-10-01 13:52:39.612068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.512 [2024-10-01 13:52:39.612085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.512 [2024-10-01 13:52:39.612137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.512 [2024-10-01 13:52:39.612173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.512 [2024-10-01 13:52:39.616970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.512 [2024-10-01 13:52:39.617091] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.512 [2024-10-01 13:52:39.617123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.512 [2024-10-01 13:52:39.617141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.512 [2024-10-01 13:52:39.617188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.512 [2024-10-01 13:52:39.617222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.512 [2024-10-01 13:52:39.617240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.512 [2024-10-01 13:52:39.617255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.512 [2024-10-01 13:52:39.617285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.512 [2024-10-01 13:52:39.621895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.512 [2024-10-01 13:52:39.622028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.512 [2024-10-01 13:52:39.622060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.512 [2024-10-01 13:52:39.622078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.512 [2024-10-01 13:52:39.622111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.512 [2024-10-01 13:52:39.622143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.512 [2024-10-01 13:52:39.622159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.512 [2024-10-01 13:52:39.622174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.512 [2024-10-01 13:52:39.622205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.512 [2024-10-01 13:52:39.627078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.512 [2024-10-01 13:52:39.627193] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.512 [2024-10-01 13:52:39.627225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.512 [2024-10-01 13:52:39.627242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.512 [2024-10-01 13:52:39.627275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.512 [2024-10-01 13:52:39.627305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.512 [2024-10-01 13:52:39.627322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.512 [2024-10-01 13:52:39.627337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.512 [2024-10-01 13:52:39.627368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.512 [2024-10-01 13:52:39.632007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.512 [2024-10-01 13:52:39.632123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.512 [2024-10-01 13:52:39.632181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.512 [2024-10-01 13:52:39.632201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.512 [2024-10-01 13:52:39.632251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.512 [2024-10-01 13:52:39.632287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.512 [2024-10-01 13:52:39.632304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.512 [2024-10-01 13:52:39.632318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.512 [2024-10-01 13:52:39.632350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.512 [2024-10-01 13:52:39.637173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.512 [2024-10-01 13:52:39.637289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.512 [2024-10-01 13:52:39.637321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.512 [2024-10-01 13:52:39.637339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.512 [2024-10-01 13:52:39.638585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.512 [2024-10-01 13:52:39.639380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.512 [2024-10-01 13:52:39.639420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.512 [2024-10-01 13:52:39.639439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.512 [2024-10-01 13:52:39.639758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.512 [2024-10-01 13:52:39.642099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.512 [2024-10-01 13:52:39.642210] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.512 [2024-10-01 13:52:39.642240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.512 [2024-10-01 13:52:39.642258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.512 [2024-10-01 13:52:39.642290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.513 [2024-10-01 13:52:39.642321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.513 [2024-10-01 13:52:39.642338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.513 [2024-10-01 13:52:39.642352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.513 [2024-10-01 13:52:39.642383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.513 [2024-10-01 13:52:39.647267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.513 [2024-10-01 13:52:39.647381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.513 [2024-10-01 13:52:39.647412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.513 [2024-10-01 13:52:39.647430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.513 [2024-10-01 13:52:39.647463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.513 [2024-10-01 13:52:39.648717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.513 [2024-10-01 13:52:39.648758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.513 [2024-10-01 13:52:39.648777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.513 [2024-10-01 13:52:39.649026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.513 [2024-10-01 13:52:39.652189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.513 [2024-10-01 13:52:39.652304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.513 [2024-10-01 13:52:39.652335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.513 [2024-10-01 13:52:39.652352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.513 [2024-10-01 13:52:39.653576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.513 [2024-10-01 13:52:39.654388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.513 [2024-10-01 13:52:39.654429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.513 [2024-10-01 13:52:39.654448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.513 [2024-10-01 13:52:39.654783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.513 [2024-10-01 13:52:39.658065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.513 [2024-10-01 13:52:39.658261] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.513 [2024-10-01 13:52:39.658293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.513 [2024-10-01 13:52:39.658312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.513 [2024-10-01 13:52:39.658353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.513 [2024-10-01 13:52:39.658385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.513 [2024-10-01 13:52:39.658403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.513 [2024-10-01 13:52:39.658418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.513 [2024-10-01 13:52:39.658448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.513 [2024-10-01 13:52:39.662284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.513 [2024-10-01 13:52:39.662397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.513 [2024-10-01 13:52:39.662429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.513 [2024-10-01 13:52:39.662447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.513 [2024-10-01 13:52:39.663697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.513 [2024-10-01 13:52:39.663951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.513 [2024-10-01 13:52:39.663988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.513 [2024-10-01 13:52:39.664007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.513 [2024-10-01 13:52:39.664900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.513 [2024-10-01 13:52:39.669280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.513 [2024-10-01 13:52:39.669626] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.513 [2024-10-01 13:52:39.669671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.513 [2024-10-01 13:52:39.669692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.513 [2024-10-01 13:52:39.669763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.513 [2024-10-01 13:52:39.669803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.513 [2024-10-01 13:52:39.669821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.513 [2024-10-01 13:52:39.669846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.513 [2024-10-01 13:52:39.669877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.513 [2024-10-01 13:52:39.672375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.513 [2024-10-01 13:52:39.673051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.513 [2024-10-01 13:52:39.673092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.513 [2024-10-01 13:52:39.673121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.513 [2024-10-01 13:52:39.673284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.513 [2024-10-01 13:52:39.673401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.513 [2024-10-01 13:52:39.673422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.513 [2024-10-01 13:52:39.673436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.513 [2024-10-01 13:52:39.673475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.513 [2024-10-01 13:52:39.680759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.513 [2024-10-01 13:52:39.680885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.513 [2024-10-01 13:52:39.680932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.513 [2024-10-01 13:52:39.680954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.513 [2024-10-01 13:52:39.680988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.513 [2024-10-01 13:52:39.681021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.513 [2024-10-01 13:52:39.681038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.513 [2024-10-01 13:52:39.681053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.513 [2024-10-01 13:52:39.681085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.513 [2024-10-01 13:52:39.683670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.513 [2024-10-01 13:52:39.684546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.513 [2024-10-01 13:52:39.684591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.513 [2024-10-01 13:52:39.684642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.513 [2024-10-01 13:52:39.684983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.513 [2024-10-01 13:52:39.685077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.513 [2024-10-01 13:52:39.685108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.513 [2024-10-01 13:52:39.685125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.513 [2024-10-01 13:52:39.685159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.513 [2024-10-01 13:52:39.691208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.513 [2024-10-01 13:52:39.691331] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.513 [2024-10-01 13:52:39.691363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.513 [2024-10-01 13:52:39.691381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.513 [2024-10-01 13:52:39.691979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.513 [2024-10-01 13:52:39.692168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.513 [2024-10-01 13:52:39.692205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.513 [2024-10-01 13:52:39.692224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.513 [2024-10-01 13:52:39.692352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.514 [2024-10-01 13:52:39.695063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.514 [2024-10-01 13:52:39.695896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.514 [2024-10-01 13:52:39.695950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.514 [2024-10-01 13:52:39.695971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.514 [2024-10-01 13:52:39.696073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.514 [2024-10-01 13:52:39.696112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.514 [2024-10-01 13:52:39.696129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.514 [2024-10-01 13:52:39.696144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.514 [2024-10-01 13:52:39.696175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.514 [2024-10-01 13:52:39.701860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.514 [2024-10-01 13:52:39.702031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.514 [2024-10-01 13:52:39.702066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.514 [2024-10-01 13:52:39.702085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.514 [2024-10-01 13:52:39.702120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.514 [2024-10-01 13:52:39.702152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.514 [2024-10-01 13:52:39.702206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.514 [2024-10-01 13:52:39.702223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.514 [2024-10-01 13:52:39.702256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.514 [2024-10-01 13:52:39.706398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.514 [2024-10-01 13:52:39.706514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.514 [2024-10-01 13:52:39.706560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.514 [2024-10-01 13:52:39.706581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.514 [2024-10-01 13:52:39.707192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.514 [2024-10-01 13:52:39.707408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.514 [2024-10-01 13:52:39.707437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.514 [2024-10-01 13:52:39.707452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.514 [2024-10-01 13:52:39.707564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.514 [2024-10-01 13:52:39.712306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.514 [2024-10-01 13:52:39.712440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.514 [2024-10-01 13:52:39.712472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.514 [2024-10-01 13:52:39.712490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.514 [2024-10-01 13:52:39.712523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.514 [2024-10-01 13:52:39.712555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.514 [2024-10-01 13:52:39.712572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.514 [2024-10-01 13:52:39.712586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.514 [2024-10-01 13:52:39.712616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.514 [2024-10-01 13:52:39.717032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.514 [2024-10-01 13:52:39.717148] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.514 [2024-10-01 13:52:39.717180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.514 [2024-10-01 13:52:39.717198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.514 [2024-10-01 13:52:39.717231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.514 [2024-10-01 13:52:39.717275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.514 [2024-10-01 13:52:39.717294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.514 [2024-10-01 13:52:39.717309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.514 [2024-10-01 13:52:39.717340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.514 [2024-10-01 13:52:39.722416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.514 [2024-10-01 13:52:39.722573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.514 [2024-10-01 13:52:39.722606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.514 [2024-10-01 13:52:39.722624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.514 [2024-10-01 13:52:39.722673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.514 [2024-10-01 13:52:39.722709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.514 [2024-10-01 13:52:39.722727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.514 [2024-10-01 13:52:39.722741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.514 [2024-10-01 13:52:39.722772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.514 [2024-10-01 13:52:39.727225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.514 [2024-10-01 13:52:39.727345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.514 [2024-10-01 13:52:39.727376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.514 [2024-10-01 13:52:39.727395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.514 [2024-10-01 13:52:39.727439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.514 [2024-10-01 13:52:39.727472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.514 [2024-10-01 13:52:39.727489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.514 [2024-10-01 13:52:39.727503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.514 [2024-10-01 13:52:39.727534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.514 [2024-10-01 13:52:39.732540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.514 [2024-10-01 13:52:39.732654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.514 [2024-10-01 13:52:39.732685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.514 [2024-10-01 13:52:39.732703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.514 [2024-10-01 13:52:39.732735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.514 [2024-10-01 13:52:39.732766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.514 [2024-10-01 13:52:39.732783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.514 [2024-10-01 13:52:39.732798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.514 [2024-10-01 13:52:39.733384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.514 [2024-10-01 13:52:39.737333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.514 [2024-10-01 13:52:39.737450] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.514 [2024-10-01 13:52:39.737482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.514 [2024-10-01 13:52:39.737500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.514 [2024-10-01 13:52:39.737555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.514 [2024-10-01 13:52:39.737587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.515 [2024-10-01 13:52:39.737604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.515 [2024-10-01 13:52:39.737618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.515 [2024-10-01 13:52:39.737648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.515 [2024-10-01 13:52:39.742632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.515 [2024-10-01 13:52:39.743972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.515 [2024-10-01 13:52:39.744018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.515 [2024-10-01 13:52:39.744040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.515 [2024-10-01 13:52:39.744810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.515 [2024-10-01 13:52:39.745171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.515 [2024-10-01 13:52:39.745211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.515 [2024-10-01 13:52:39.745231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.515 [2024-10-01 13:52:39.745304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.515 [2024-10-01 13:52:39.747423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.515 [2024-10-01 13:52:39.747538] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.515 [2024-10-01 13:52:39.747570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.515 [2024-10-01 13:52:39.747588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.515 [2024-10-01 13:52:39.747620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.515 [2024-10-01 13:52:39.747651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.515 [2024-10-01 13:52:39.747668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.515 [2024-10-01 13:52:39.747683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.515 [2024-10-01 13:52:39.747714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.515 [2024-10-01 13:52:39.752740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.515 [2024-10-01 13:52:39.752870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.515 [2024-10-01 13:52:39.752902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.515 [2024-10-01 13:52:39.752938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.515 [2024-10-01 13:52:39.754176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.515 [2024-10-01 13:52:39.754416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.515 [2024-10-01 13:52:39.754453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.515 [2024-10-01 13:52:39.754495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.515 [2024-10-01 13:52:39.755427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.515 [2024-10-01 13:52:39.757510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.515 [2024-10-01 13:52:39.757625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.515 [2024-10-01 13:52:39.757656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.515 [2024-10-01 13:52:39.757674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.515 [2024-10-01 13:52:39.757707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.515 [2024-10-01 13:52:39.758963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.515 [2024-10-01 13:52:39.759003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.515 [2024-10-01 13:52:39.759022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.515 [2024-10-01 13:52:39.759791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.515 [2024-10-01 13:52:39.762851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.515 [2024-10-01 13:52:39.763524] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.515 [2024-10-01 13:52:39.763569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.515 [2024-10-01 13:52:39.763591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.515 [2024-10-01 13:52:39.763758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.515 [2024-10-01 13:52:39.763877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.515 [2024-10-01 13:52:39.763898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.515 [2024-10-01 13:52:39.763927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.515 [2024-10-01 13:52:39.763971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.515 [2024-10-01 13:52:39.767595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.515 [2024-10-01 13:52:39.767711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.515 [2024-10-01 13:52:39.767743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.515 [2024-10-01 13:52:39.767761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.515 [2024-10-01 13:52:39.767794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.515 [2024-10-01 13:52:39.767826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.515 [2024-10-01 13:52:39.767844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.515 [2024-10-01 13:52:39.767858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.515 [2024-10-01 13:52:39.767889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.515 [2024-10-01 13:52:39.774904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.515 [2024-10-01 13:52:39.775343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.515 [2024-10-01 13:52:39.775417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.515 [2024-10-01 13:52:39.775441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.515 [2024-10-01 13:52:39.775515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.515 [2024-10-01 13:52:39.775553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.515 [2024-10-01 13:52:39.775572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.515 [2024-10-01 13:52:39.775587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.515 [2024-10-01 13:52:39.775618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.515 [2024-10-01 13:52:39.777690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.515 [2024-10-01 13:52:39.777802] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.515 [2024-10-01 13:52:39.777833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.515 [2024-10-01 13:52:39.777851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.515 [2024-10-01 13:52:39.777883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.515 [2024-10-01 13:52:39.777931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.515 [2024-10-01 13:52:39.777952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.515 [2024-10-01 13:52:39.777966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.515 [2024-10-01 13:52:39.778558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.515 [2024-10-01 13:52:39.786123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.515 [2024-10-01 13:52:39.786243] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.516 [2024-10-01 13:52:39.786275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.516 [2024-10-01 13:52:39.786292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.516 [2024-10-01 13:52:39.786324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.516 [2024-10-01 13:52:39.786355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.516 [2024-10-01 13:52:39.786372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.516 [2024-10-01 13:52:39.786386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.516 [2024-10-01 13:52:39.786418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.516 [2024-10-01 13:52:39.787778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.516 [2024-10-01 13:52:39.787887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.516 [2024-10-01 13:52:39.787932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.516 [2024-10-01 13:52:39.787953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.516 [2024-10-01 13:52:39.789174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.516 [2024-10-01 13:52:39.789990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.516 [2024-10-01 13:52:39.790029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.516 [2024-10-01 13:52:39.790048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.516 [2024-10-01 13:52:39.790372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.516 [2024-10-01 13:52:39.796392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.516 [2024-10-01 13:52:39.796508] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.516 [2024-10-01 13:52:39.796540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.516 [2024-10-01 13:52:39.796558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.516 [2024-10-01 13:52:39.797151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.516 [2024-10-01 13:52:39.797338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.516 [2024-10-01 13:52:39.797374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.516 [2024-10-01 13:52:39.797392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.516 [2024-10-01 13:52:39.797501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.516 [2024-10-01 13:52:39.797866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.516 [2024-10-01 13:52:39.799176] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.516 [2024-10-01 13:52:39.799221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.516 [2024-10-01 13:52:39.799241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.516 [2024-10-01 13:52:39.799467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.516 [2024-10-01 13:52:39.800376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.516 [2024-10-01 13:52:39.800415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.516 [2024-10-01 13:52:39.800434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.516 [2024-10-01 13:52:39.801165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.516 [2024-10-01 13:52:39.806817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.516 [2024-10-01 13:52:39.806946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.516 [2024-10-01 13:52:39.806979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.516 [2024-10-01 13:52:39.806997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.516 [2024-10-01 13:52:39.807030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.516 [2024-10-01 13:52:39.807061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.516 [2024-10-01 13:52:39.807079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.516 [2024-10-01 13:52:39.807093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.516 [2024-10-01 13:52:39.807123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.516 [2024-10-01 13:52:39.808521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.516 [2024-10-01 13:52:39.808641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.516 [2024-10-01 13:52:39.808672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.516 [2024-10-01 13:52:39.808689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.516 [2024-10-01 13:52:39.808722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.516 [2024-10-01 13:52:39.808753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.516 [2024-10-01 13:52:39.808770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.516 [2024-10-01 13:52:39.808785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.516 [2024-10-01 13:52:39.808815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.516 [2024-10-01 13:52:39.817014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.516 [2024-10-01 13:52:39.817144] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.516 [2024-10-01 13:52:39.817177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.516 [2024-10-01 13:52:39.817196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.516 [2024-10-01 13:52:39.817229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.516 [2024-10-01 13:52:39.817260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.516 [2024-10-01 13:52:39.817277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.516 [2024-10-01 13:52:39.817291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.516 [2024-10-01 13:52:39.817322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.516 [2024-10-01 13:52:39.819807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.516 [2024-10-01 13:52:39.819974] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.516 [2024-10-01 13:52:39.820006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.516 [2024-10-01 13:52:39.820024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.516 [2024-10-01 13:52:39.820058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.516 [2024-10-01 13:52:39.820089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.516 [2024-10-01 13:52:39.820106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.516 [2024-10-01 13:52:39.820121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.516 [2024-10-01 13:52:39.820152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.516 [2024-10-01 13:52:39.827129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.516 [2024-10-01 13:52:39.827253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.516 [2024-10-01 13:52:39.827285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.516 [2024-10-01 13:52:39.827336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.516 [2024-10-01 13:52:39.827372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.516 [2024-10-01 13:52:39.827405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.517 [2024-10-01 13:52:39.827422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.517 [2024-10-01 13:52:39.827437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.517 [2024-10-01 13:52:39.827468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.517 [2024-10-01 13:52:39.830850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.517 [2024-10-01 13:52:39.830982] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.517 [2024-10-01 13:52:39.831013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.517 [2024-10-01 13:52:39.831031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.517 [2024-10-01 13:52:39.831064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.517 [2024-10-01 13:52:39.831095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.517 [2024-10-01 13:52:39.831113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.517 [2024-10-01 13:52:39.831127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.517 [2024-10-01 13:52:39.831157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.517 [2024-10-01 13:52:39.837226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.517 [2024-10-01 13:52:39.837345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.517 [2024-10-01 13:52:39.837377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.517 [2024-10-01 13:52:39.837395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.517 [2024-10-01 13:52:39.837429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.517 [2024-10-01 13:52:39.837460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.517 [2024-10-01 13:52:39.837477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.517 [2024-10-01 13:52:39.837492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.517 [2024-10-01 13:52:39.837523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.517 [2024-10-01 13:52:39.841175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.517 [2024-10-01 13:52:39.841290] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.517 [2024-10-01 13:52:39.841321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.517 [2024-10-01 13:52:39.841339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.517 [2024-10-01 13:52:39.841934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.517 [2024-10-01 13:52:39.842121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.517 [2024-10-01 13:52:39.842183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.517 [2024-10-01 13:52:39.842203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.517 [2024-10-01 13:52:39.842315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.517 [2024-10-01 13:52:39.847317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.517 [2024-10-01 13:52:39.847432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.517 [2024-10-01 13:52:39.847464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.517 [2024-10-01 13:52:39.847482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.517 [2024-10-01 13:52:39.847515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.517 [2024-10-01 13:52:39.847546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.517 [2024-10-01 13:52:39.847563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.517 [2024-10-01 13:52:39.847578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.517 [2024-10-01 13:52:39.848805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.517 [2024-10-01 13:52:39.851681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.517 [2024-10-01 13:52:39.851810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.517 [2024-10-01 13:52:39.851842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.517 [2024-10-01 13:52:39.851860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.517 [2024-10-01 13:52:39.851892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.517 [2024-10-01 13:52:39.851943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.517 [2024-10-01 13:52:39.851964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.517 [2024-10-01 13:52:39.851978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.517 [2024-10-01 13:52:39.852009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.517 [2024-10-01 13:52:39.857408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.517 [2024-10-01 13:52:39.857521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.517 [2024-10-01 13:52:39.857553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.517 [2024-10-01 13:52:39.857571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.517 [2024-10-01 13:52:39.857604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.517 [2024-10-01 13:52:39.857647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.517 [2024-10-01 13:52:39.857668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.517 [2024-10-01 13:52:39.857682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.517 [2024-10-01 13:52:39.858923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.517 [2024-10-01 13:52:39.861889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.517 [2024-10-01 13:52:39.862037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.517 [2024-10-01 13:52:39.862069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.517 [2024-10-01 13:52:39.862087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.517 [2024-10-01 13:52:39.862120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.517 [2024-10-01 13:52:39.862161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.517 [2024-10-01 13:52:39.862177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.517 [2024-10-01 13:52:39.862191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.517 [2024-10-01 13:52:39.862222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.517 8086.55 IOPS, 31.59 MiB/s [2024-10-01 13:52:39.869829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.517 [2024-10-01 13:52:39.871466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.517 [2024-10-01 13:52:39.871513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.518 [2024-10-01 13:52:39.871535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.518 [2024-10-01 13:52:39.872242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.518 [2024-10-01 13:52:39.872459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.518 [2024-10-01 13:52:39.872503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.518 [2024-10-01 13:52:39.872522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.518 [2024-10-01 13:52:39.872633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.518 [2024-10-01 13:52:39.872660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.518 [2024-10-01 13:52:39.872753] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.518 [2024-10-01 13:52:39.872783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.518 [2024-10-01 13:52:39.872801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.518 [2024-10-01 13:52:39.874038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.518 [2024-10-01 13:52:39.874285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.518 [2024-10-01 13:52:39.874328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.518 [2024-10-01 13:52:39.874343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.518 [2024-10-01 13:52:39.875260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.518 [2024-10-01 13:52:39.880051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.518 [2024-10-01 13:52:39.880167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.518 [2024-10-01 13:52:39.880199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.518 [2024-10-01 13:52:39.880217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.518 [2024-10-01 13:52:39.880287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.518 [2024-10-01 13:52:39.880320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.518 [2024-10-01 13:52:39.880337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.518 [2024-10-01 13:52:39.880352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.518 [2024-10-01 13:52:39.880383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.518 [2024-10-01 13:52:39.882726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.518 [2024-10-01 13:52:39.883402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.518 [2024-10-01 13:52:39.883446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.518 [2024-10-01 13:52:39.883467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.518 [2024-10-01 13:52:39.883657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.518 [2024-10-01 13:52:39.883774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.518 [2024-10-01 13:52:39.883804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.518 [2024-10-01 13:52:39.883822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.518 [2024-10-01 13:52:39.883862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.518 [2024-10-01 13:52:39.890907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.518 [2024-10-01 13:52:39.891037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.518 [2024-10-01 13:52:39.891069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.518 [2024-10-01 13:52:39.891087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.518 [2024-10-01 13:52:39.891120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.518 [2024-10-01 13:52:39.891151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.518 [2024-10-01 13:52:39.891168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.518 [2024-10-01 13:52:39.891182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.518 [2024-10-01 13:52:39.891212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.518 [2024-10-01 13:52:39.894703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.518 [2024-10-01 13:52:39.895066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.518 [2024-10-01 13:52:39.895110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.518 [2024-10-01 13:52:39.895131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.518 [2024-10-01 13:52:39.895201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.518 [2024-10-01 13:52:39.895239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.518 [2024-10-01 13:52:39.895256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.518 [2024-10-01 13:52:39.895287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.518 [2024-10-01 13:52:39.895321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.518 [2024-10-01 13:52:39.901359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.518 [2024-10-01 13:52:39.901474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.518 [2024-10-01 13:52:39.901506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.518 [2024-10-01 13:52:39.901523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.518 [2024-10-01 13:52:39.902120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.518 [2024-10-01 13:52:39.902306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.518 [2024-10-01 13:52:39.902334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.518 [2024-10-01 13:52:39.902349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.518 [2024-10-01 13:52:39.902455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.518 [2024-10-01 13:52:39.906016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.518 [2024-10-01 13:52:39.906130] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.518 [2024-10-01 13:52:39.906162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.518 [2024-10-01 13:52:39.906191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.518 [2024-10-01 13:52:39.906223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.518 [2024-10-01 13:52:39.906254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.518 [2024-10-01 13:52:39.906271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.518 [2024-10-01 13:52:39.906285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.518 [2024-10-01 13:52:39.906315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.518 [2024-10-01 13:52:39.911985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.518 [2024-10-01 13:52:39.912106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.518 [2024-10-01 13:52:39.912138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.518 [2024-10-01 13:52:39.912155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.518 [2024-10-01 13:52:39.912188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.518 [2024-10-01 13:52:39.912219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.518 [2024-10-01 13:52:39.912237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.518 [2024-10-01 13:52:39.912251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.518 [2024-10-01 13:52:39.912282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.518 [2024-10-01 13:52:39.916443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.518 [2024-10-01 13:52:39.916556] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.518 [2024-10-01 13:52:39.916607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.519 [2024-10-01 13:52:39.916627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.519 [2024-10-01 13:52:39.917217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.519 [2024-10-01 13:52:39.917404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.519 [2024-10-01 13:52:39.917440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.519 [2024-10-01 13:52:39.917458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.519 [2024-10-01 13:52:39.917566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.519 [2024-10-01 13:52:39.922312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.519 [2024-10-01 13:52:39.922436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.519 [2024-10-01 13:52:39.922467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.519 [2024-10-01 13:52:39.922485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.519 [2024-10-01 13:52:39.922518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.519 [2024-10-01 13:52:39.922560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.519 [2024-10-01 13:52:39.922581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.519 [2024-10-01 13:52:39.922595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.519 [2024-10-01 13:52:39.922626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.519 [2024-10-01 13:52:39.927081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.519 [2024-10-01 13:52:39.927196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.519 [2024-10-01 13:52:39.927227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.519 [2024-10-01 13:52:39.927245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.519 [2024-10-01 13:52:39.927277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.519 [2024-10-01 13:52:39.927308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.519 [2024-10-01 13:52:39.927324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.519 [2024-10-01 13:52:39.927338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.519 [2024-10-01 13:52:39.927368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.519 [2024-10-01 13:52:39.932462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.519 [2024-10-01 13:52:39.932576] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.519 [2024-10-01 13:52:39.932608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.519 [2024-10-01 13:52:39.932625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.519 [2024-10-01 13:52:39.932657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.519 [2024-10-01 13:52:39.932713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.519 [2024-10-01 13:52:39.932731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.519 [2024-10-01 13:52:39.932745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.519 [2024-10-01 13:52:39.932775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.519 [2024-10-01 13:52:39.937293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.519 [2024-10-01 13:52:39.937407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.519 [2024-10-01 13:52:39.937439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.519 [2024-10-01 13:52:39.937456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.519 [2024-10-01 13:52:39.937489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.519 [2024-10-01 13:52:39.937520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.519 [2024-10-01 13:52:39.937536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.519 [2024-10-01 13:52:39.937550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.519 [2024-10-01 13:52:39.937580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.519 [2024-10-01 13:52:39.942562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.519 [2024-10-01 13:52:39.942676] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.519 [2024-10-01 13:52:39.942708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.519 [2024-10-01 13:52:39.942725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.519 [2024-10-01 13:52:39.942757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.519 [2024-10-01 13:52:39.942788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.519 [2024-10-01 13:52:39.942804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.519 [2024-10-01 13:52:39.942818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.519 [2024-10-01 13:52:39.942848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.519 [2024-10-01 13:52:39.947383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.519 [2024-10-01 13:52:39.947497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.519 [2024-10-01 13:52:39.947529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.519 [2024-10-01 13:52:39.947546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.519 [2024-10-01 13:52:39.947594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.519 [2024-10-01 13:52:39.947629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.519 [2024-10-01 13:52:39.947646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.519 [2024-10-01 13:52:39.947660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.519 [2024-10-01 13:52:39.947710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.519 [2024-10-01 13:52:39.952653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.519 [2024-10-01 13:52:39.952770] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.519 [2024-10-01 13:52:39.952802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.519 [2024-10-01 13:52:39.952820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.519 [2024-10-01 13:52:39.954045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.519 [2024-10-01 13:52:39.954821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.519 [2024-10-01 13:52:39.954861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.519 [2024-10-01 13:52:39.954879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.519 [2024-10-01 13:52:39.955213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.519 [2024-10-01 13:52:39.957474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.519 [2024-10-01 13:52:39.957584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.519 [2024-10-01 13:52:39.957615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.519 [2024-10-01 13:52:39.957633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.519 [2024-10-01 13:52:39.957665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.519 [2024-10-01 13:52:39.957695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.519 [2024-10-01 13:52:39.957712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.519 [2024-10-01 13:52:39.957726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.519 [2024-10-01 13:52:39.958335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.519 [2024-10-01 13:52:39.962743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.519 [2024-10-01 13:52:39.964051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.519 [2024-10-01 13:52:39.964097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.520 [2024-10-01 13:52:39.964118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.520 [2024-10-01 13:52:39.964343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.520 [2024-10-01 13:52:39.965252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.520 [2024-10-01 13:52:39.965291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.520 [2024-10-01 13:52:39.965310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.520 [2024-10-01 13:52:39.966039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.520 [2024-10-01 13:52:39.967564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.520 [2024-10-01 13:52:39.968863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.520 [2024-10-01 13:52:39.968908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.520 [2024-10-01 13:52:39.968960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.520 [2024-10-01 13:52:39.969725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.520 [2024-10-01 13:52:39.970080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.520 [2024-10-01 13:52:39.970119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.520 [2024-10-01 13:52:39.970137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.520 [2024-10-01 13:52:39.970209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.520 [2024-10-01 13:52:39.973460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.520 [2024-10-01 13:52:39.973582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.520 [2024-10-01 13:52:39.973613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.520 [2024-10-01 13:52:39.973630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.520 [2024-10-01 13:52:39.973662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.520 [2024-10-01 13:52:39.973694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.520 [2024-10-01 13:52:39.973711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.520 [2024-10-01 13:52:39.973727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.520 [2024-10-01 13:52:39.973757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.520 [2024-10-01 13:52:39.977652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.520 [2024-10-01 13:52:39.977764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.520 [2024-10-01 13:52:39.977795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.520 [2024-10-01 13:52:39.977813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.520 [2024-10-01 13:52:39.979046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.520 [2024-10-01 13:52:39.979279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.520 [2024-10-01 13:52:39.979316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.520 [2024-10-01 13:52:39.979334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.520 [2024-10-01 13:52:39.980242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.520 [2024-10-01 13:52:39.984522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.520 [2024-10-01 13:52:39.984858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.520 [2024-10-01 13:52:39.984902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.520 [2024-10-01 13:52:39.984939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.520 [2024-10-01 13:52:39.985012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.520 [2024-10-01 13:52:39.985050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.520 [2024-10-01 13:52:39.985085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.520 [2024-10-01 13:52:39.985101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.520 [2024-10-01 13:52:39.985133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.520 [2024-10-01 13:52:39.988328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.520 [2024-10-01 13:52:39.988452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.520 [2024-10-01 13:52:39.988484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.520 [2024-10-01 13:52:39.988501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.520 [2024-10-01 13:52:39.988534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.520 [2024-10-01 13:52:39.988565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.520 [2024-10-01 13:52:39.988581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.520 [2024-10-01 13:52:39.988595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.520 [2024-10-01 13:52:39.988625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.520 [2024-10-01 13:52:39.995619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.520 [2024-10-01 13:52:39.995735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.520 [2024-10-01 13:52:39.995767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.520 [2024-10-01 13:52:39.995784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.520 [2024-10-01 13:52:39.995817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.520 [2024-10-01 13:52:39.995847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.520 [2024-10-01 13:52:39.995864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.520 [2024-10-01 13:52:39.995878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.520 [2024-10-01 13:52:39.995908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.520 [2024-10-01 13:52:39.999414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.520 [2024-10-01 13:52:39.999748] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.520 [2024-10-01 13:52:39.999796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.520 [2024-10-01 13:52:39.999816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.520 [2024-10-01 13:52:39.999884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.520 [2024-10-01 13:52:39.999940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.520 [2024-10-01 13:52:39.999962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.520 [2024-10-01 13:52:39.999976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.520 [2024-10-01 13:52:40.000009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.520 [2024-10-01 13:52:40.005891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.520 [2024-10-01 13:52:40.006020] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.520 [2024-10-01 13:52:40.006052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.520 [2024-10-01 13:52:40.006069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.520 [2024-10-01 13:52:40.006652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.520 [2024-10-01 13:52:40.006838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.520 [2024-10-01 13:52:40.006876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.521 [2024-10-01 13:52:40.006895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.521 [2024-10-01 13:52:40.007017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.521 [2024-10-01 13:52:40.010474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.521 [2024-10-01 13:52:40.010597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.521 [2024-10-01 13:52:40.010628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.521 [2024-10-01 13:52:40.010646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.521 [2024-10-01 13:52:40.010678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.521 [2024-10-01 13:52:40.010709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.521 [2024-10-01 13:52:40.010726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.521 [2024-10-01 13:52:40.010740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.521 [2024-10-01 13:52:40.010771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.521 [2024-10-01 13:52:40.016368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.521 [2024-10-01 13:52:40.016482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.521 [2024-10-01 13:52:40.016514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.521 [2024-10-01 13:52:40.016532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.521 [2024-10-01 13:52:40.016566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.521 [2024-10-01 13:52:40.016597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.521 [2024-10-01 13:52:40.016614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.521 [2024-10-01 13:52:40.016628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.521 [2024-10-01 13:52:40.016659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.521 [2024-10-01 13:52:40.020750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.521 [2024-10-01 13:52:40.020864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.521 [2024-10-01 13:52:40.020895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.521 [2024-10-01 13:52:40.020926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.521 [2024-10-01 13:52:40.021519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.521 [2024-10-01 13:52:40.021707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.521 [2024-10-01 13:52:40.021744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.521 [2024-10-01 13:52:40.021762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.521 [2024-10-01 13:52:40.021871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.521 [2024-10-01 13:52:40.026504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.521 [2024-10-01 13:52:40.026626] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.521 [2024-10-01 13:52:40.026658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.521 [2024-10-01 13:52:40.026676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.521 [2024-10-01 13:52:40.026708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.521 [2024-10-01 13:52:40.026739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.521 [2024-10-01 13:52:40.026756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.521 [2024-10-01 13:52:40.026770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.521 [2024-10-01 13:52:40.026800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.521 [2024-10-01 13:52:40.031217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.521 [2024-10-01 13:52:40.031332] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.521 [2024-10-01 13:52:40.031363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.521 [2024-10-01 13:52:40.031381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.521 [2024-10-01 13:52:40.031413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.521 [2024-10-01 13:52:40.031444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.521 [2024-10-01 13:52:40.031460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.521 [2024-10-01 13:52:40.031474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.521 [2024-10-01 13:52:40.031504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.521 [2024-10-01 13:52:40.036604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.521 [2024-10-01 13:52:40.036719] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.521 [2024-10-01 13:52:40.036750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.521 [2024-10-01 13:52:40.036768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.521 [2024-10-01 13:52:40.036815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.521 [2024-10-01 13:52:40.036851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.521 [2024-10-01 13:52:40.036868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.521 [2024-10-01 13:52:40.036906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.521 [2024-10-01 13:52:40.036958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.521 [2024-10-01 13:52:40.041310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.521 [2024-10-01 13:52:40.041425] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.521 [2024-10-01 13:52:40.041457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.521 [2024-10-01 13:52:40.041474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.521 [2024-10-01 13:52:40.041507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.521 [2024-10-01 13:52:40.041537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.521 [2024-10-01 13:52:40.041554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.521 [2024-10-01 13:52:40.041568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.521 [2024-10-01 13:52:40.041598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.521 [2024-10-01 13:52:40.046701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.521 [2024-10-01 13:52:40.046814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.521 [2024-10-01 13:52:40.046846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.521 [2024-10-01 13:52:40.046864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.521 [2024-10-01 13:52:40.047453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.521 [2024-10-01 13:52:40.047637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.521 [2024-10-01 13:52:40.047673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.521 [2024-10-01 13:52:40.047692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.521 [2024-10-01 13:52:40.047800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.522 [2024-10-01 13:52:40.051406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.522 [2024-10-01 13:52:40.051521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.522 [2024-10-01 13:52:40.051553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.522 [2024-10-01 13:52:40.051571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.522 [2024-10-01 13:52:40.051603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.522 [2024-10-01 13:52:40.051634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.522 [2024-10-01 13:52:40.051651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.522 [2024-10-01 13:52:40.051664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.522 [2024-10-01 13:52:40.051695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.522 [2024-10-01 13:52:40.058485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.522 [2024-10-01 13:52:40.058832] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.522 [2024-10-01 13:52:40.058893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.522 [2024-10-01 13:52:40.058931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.522 [2024-10-01 13:52:40.059024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.522 [2024-10-01 13:52:40.059064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.522 [2024-10-01 13:52:40.059082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.522 [2024-10-01 13:52:40.059097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.522 [2024-10-01 13:52:40.059128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.522 [2024-10-01 13:52:40.062177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.522 [2024-10-01 13:52:40.062365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.522 [2024-10-01 13:52:40.062406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.522 [2024-10-01 13:52:40.062426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.522 [2024-10-01 13:52:40.062467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.522 [2024-10-01 13:52:40.062501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.522 [2024-10-01 13:52:40.062518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.522 [2024-10-01 13:52:40.062532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.522 [2024-10-01 13:52:40.062577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.522 [2024-10-01 13:52:40.069526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.522 [2024-10-01 13:52:40.069643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.522 [2024-10-01 13:52:40.069675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.522 [2024-10-01 13:52:40.069704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.522 [2024-10-01 13:52:40.069737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.522 [2024-10-01 13:52:40.069768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.522 [2024-10-01 13:52:40.069785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.522 [2024-10-01 13:52:40.069799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.522 [2024-10-01 13:52:40.069829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.522 [2024-10-01 13:52:40.073344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.522 [2024-10-01 13:52:40.073679] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.522 [2024-10-01 13:52:40.073723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.522 [2024-10-01 13:52:40.073744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.522 [2024-10-01 13:52:40.073814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.522 [2024-10-01 13:52:40.073869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.522 [2024-10-01 13:52:40.073889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.522 [2024-10-01 13:52:40.073903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.522 [2024-10-01 13:52:40.073951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.522 [2024-10-01 13:52:40.079924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.522 [2024-10-01 13:52:40.080048] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.522 [2024-10-01 13:52:40.080079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.522 [2024-10-01 13:52:40.080098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.522 [2024-10-01 13:52:40.080679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.522 [2024-10-01 13:52:40.080868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.522 [2024-10-01 13:52:40.080924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.522 [2024-10-01 13:52:40.080946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.522 [2024-10-01 13:52:40.081058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.522 [2024-10-01 13:52:40.084649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.522 [2024-10-01 13:52:40.084767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.522 [2024-10-01 13:52:40.084798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.522 [2024-10-01 13:52:40.084817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.522 [2024-10-01 13:52:40.084850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.522 [2024-10-01 13:52:40.084882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.522 [2024-10-01 13:52:40.084899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.522 [2024-10-01 13:52:40.084929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.522 [2024-10-01 13:52:40.084964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.522 [2024-10-01 13:52:40.090592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.522 [2024-10-01 13:52:40.090738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.522 [2024-10-01 13:52:40.090770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.522 [2024-10-01 13:52:40.090788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.522 [2024-10-01 13:52:40.090822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.522 [2024-10-01 13:52:40.090860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.522 [2024-10-01 13:52:40.090878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.522 [2024-10-01 13:52:40.090893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.522 [2024-10-01 13:52:40.090978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.522 [2024-10-01 13:52:40.095109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.522 [2024-10-01 13:52:40.095226] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.522 [2024-10-01 13:52:40.095257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.522 [2024-10-01 13:52:40.095275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.523 [2024-10-01 13:52:40.095859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.523 [2024-10-01 13:52:40.096049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.523 [2024-10-01 13:52:40.096077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.523 [2024-10-01 13:52:40.096092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.523 [2024-10-01 13:52:40.096211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.523 [2024-10-01 13:52:40.100899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.523 [2024-10-01 13:52:40.101034] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.523 [2024-10-01 13:52:40.101066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.523 [2024-10-01 13:52:40.101083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.523 [2024-10-01 13:52:40.101116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.523 [2024-10-01 13:52:40.101147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.523 [2024-10-01 13:52:40.101165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.523 [2024-10-01 13:52:40.101179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.523 [2024-10-01 13:52:40.101209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.523 [2024-10-01 13:52:40.105612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.523 [2024-10-01 13:52:40.105725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.523 [2024-10-01 13:52:40.105757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.523 [2024-10-01 13:52:40.105775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.523 [2024-10-01 13:52:40.105807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.523 [2024-10-01 13:52:40.105838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.523 [2024-10-01 13:52:40.105854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.523 [2024-10-01 13:52:40.105869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.523 [2024-10-01 13:52:40.105899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.523 [2024-10-01 13:52:40.111005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.523 [2024-10-01 13:52:40.111117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.523 [2024-10-01 13:52:40.111148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.523 [2024-10-01 13:52:40.111183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.523 [2024-10-01 13:52:40.111234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.523 [2024-10-01 13:52:40.111271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.523 [2024-10-01 13:52:40.111288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.523 [2024-10-01 13:52:40.111303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.523 [2024-10-01 13:52:40.111333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.523 [2024-10-01 13:52:40.115865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.523 [2024-10-01 13:52:40.115993] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.523 [2024-10-01 13:52:40.116025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.523 [2024-10-01 13:52:40.116043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.523 [2024-10-01 13:52:40.116075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.523 [2024-10-01 13:52:40.116106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.523 [2024-10-01 13:52:40.116123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.523 [2024-10-01 13:52:40.116137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.523 [2024-10-01 13:52:40.116168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.523 [2024-10-01 13:52:40.121093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.523 [2024-10-01 13:52:40.121206] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.523 [2024-10-01 13:52:40.121237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.523 [2024-10-01 13:52:40.121254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.523 [2024-10-01 13:52:40.121286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.523 [2024-10-01 13:52:40.121317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.523 [2024-10-01 13:52:40.121333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.523 [2024-10-01 13:52:40.121347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.523 [2024-10-01 13:52:40.121377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.523 [2024-10-01 13:52:40.125970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.523 [2024-10-01 13:52:40.126115] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.523 [2024-10-01 13:52:40.126148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.523 [2024-10-01 13:52:40.126165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.523 [2024-10-01 13:52:40.126198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.523 [2024-10-01 13:52:40.126229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.523 [2024-10-01 13:52:40.126265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.523 [2024-10-01 13:52:40.126280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.523 [2024-10-01 13:52:40.126313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.523 [2024-10-01 13:52:40.131188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.523 [2024-10-01 13:52:40.131301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.523 [2024-10-01 13:52:40.131332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.523 [2024-10-01 13:52:40.131350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.523 [2024-10-01 13:52:40.132563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.523 [2024-10-01 13:52:40.133361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.523 [2024-10-01 13:52:40.133400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.523 [2024-10-01 13:52:40.133419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.523 [2024-10-01 13:52:40.133738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.523 [2024-10-01 13:52:40.136085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.523 [2024-10-01 13:52:40.136195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.523 [2024-10-01 13:52:40.136226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.523 [2024-10-01 13:52:40.136244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.523 [2024-10-01 13:52:40.136275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.523 [2024-10-01 13:52:40.136305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.523 [2024-10-01 13:52:40.136322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.523 [2024-10-01 13:52:40.136336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.523 [2024-10-01 13:52:40.136904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.523 [2024-10-01 13:52:40.141277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.523 [2024-10-01 13:52:40.141390] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.523 [2024-10-01 13:52:40.141422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.523 [2024-10-01 13:52:40.141440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.523 [2024-10-01 13:52:40.142671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.523 [2024-10-01 13:52:40.142902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.523 [2024-10-01 13:52:40.142954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.523 [2024-10-01 13:52:40.142972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.523 [2024-10-01 13:52:40.143856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.523 [2024-10-01 13:52:40.146170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.523 [2024-10-01 13:52:40.147483] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.524 [2024-10-01 13:52:40.147530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.524 [2024-10-01 13:52:40.147551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.524 [2024-10-01 13:52:40.148308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.524 [2024-10-01 13:52:40.148647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.524 [2024-10-01 13:52:40.148686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.524 [2024-10-01 13:52:40.148704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.524 [2024-10-01 13:52:40.148776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.524 [2024-10-01 13:52:40.152016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.524 [2024-10-01 13:52:40.152135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.524 [2024-10-01 13:52:40.152167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.524 [2024-10-01 13:52:40.152185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.524 [2024-10-01 13:52:40.152218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.524 [2024-10-01 13:52:40.152249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.524 [2024-10-01 13:52:40.152267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.524 [2024-10-01 13:52:40.152281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.524 [2024-10-01 13:52:40.152311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.524 [2024-10-01 13:52:40.156259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.524 [2024-10-01 13:52:40.157546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.524 [2024-10-01 13:52:40.157590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.524 [2024-10-01 13:52:40.157611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.524 [2024-10-01 13:52:40.157836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.524 [2024-10-01 13:52:40.158756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.524 [2024-10-01 13:52:40.158796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.524 [2024-10-01 13:52:40.158815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.524 [2024-10-01 13:52:40.159545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.524 [2024-10-01 13:52:40.163106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.524 [2024-10-01 13:52:40.163439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.524 [2024-10-01 13:52:40.163482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.524 [2024-10-01 13:52:40.163503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.524 [2024-10-01 13:52:40.163595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.524 [2024-10-01 13:52:40.163634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.524 [2024-10-01 13:52:40.163652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.524 [2024-10-01 13:52:40.163666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.524 [2024-10-01 13:52:40.163697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.524 [2024-10-01 13:52:40.166954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.524 [2024-10-01 13:52:40.167077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.524 [2024-10-01 13:52:40.167109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.524 [2024-10-01 13:52:40.167127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.524 [2024-10-01 13:52:40.167160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.524 [2024-10-01 13:52:40.167192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.524 [2024-10-01 13:52:40.167209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.524 [2024-10-01 13:52:40.167223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.524 [2024-10-01 13:52:40.167253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.524 [2024-10-01 13:52:40.174198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.524 [2024-10-01 13:52:40.174314] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.524 [2024-10-01 13:52:40.174344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.524 [2024-10-01 13:52:40.174362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.524 [2024-10-01 13:52:40.174394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.524 [2024-10-01 13:52:40.174425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.524 [2024-10-01 13:52:40.174442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.524 [2024-10-01 13:52:40.174456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.524 [2024-10-01 13:52:40.174485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.524 [2024-10-01 13:52:40.177976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.524 [2024-10-01 13:52:40.178312] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.524 [2024-10-01 13:52:40.178355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.524 [2024-10-01 13:52:40.178376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.524 [2024-10-01 13:52:40.178446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.524 [2024-10-01 13:52:40.178484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.524 [2024-10-01 13:52:40.178502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.524 [2024-10-01 13:52:40.178535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.524 [2024-10-01 13:52:40.178584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.524 [2024-10-01 13:52:40.184510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.524 [2024-10-01 13:52:40.184629] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.524 [2024-10-01 13:52:40.184661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.524 [2024-10-01 13:52:40.184678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.524 [2024-10-01 13:52:40.185279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.524 [2024-10-01 13:52:40.185467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.524 [2024-10-01 13:52:40.185503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.524 [2024-10-01 13:52:40.185523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.524 [2024-10-01 13:52:40.185632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.524 [2024-10-01 13:52:40.189123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.524 [2024-10-01 13:52:40.189239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.524 [2024-10-01 13:52:40.189271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.524 [2024-10-01 13:52:40.189289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.524 [2024-10-01 13:52:40.189321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.524 [2024-10-01 13:52:40.189352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.524 [2024-10-01 13:52:40.189369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.524 [2024-10-01 13:52:40.189383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.524 [2024-10-01 13:52:40.189414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.525 [2024-10-01 13:52:40.195048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.525 [2024-10-01 13:52:40.195174] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.525 [2024-10-01 13:52:40.195205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.525 [2024-10-01 13:52:40.195224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.525 [2024-10-01 13:52:40.195256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.525 [2024-10-01 13:52:40.195288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.525 [2024-10-01 13:52:40.195305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.525 [2024-10-01 13:52:40.195320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.525 [2024-10-01 13:52:40.195350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.525 [2024-10-01 13:52:40.199573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.525 [2024-10-01 13:52:40.199692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.525 [2024-10-01 13:52:40.199738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.525 [2024-10-01 13:52:40.199770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.525 [2024-10-01 13:52:40.200376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.525 [2024-10-01 13:52:40.200567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.525 [2024-10-01 13:52:40.200594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.525 [2024-10-01 13:52:40.200609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.525 [2024-10-01 13:52:40.200741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.525 [2024-10-01 13:52:40.205332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.525 [2024-10-01 13:52:40.205459] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.525 [2024-10-01 13:52:40.205490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.525 [2024-10-01 13:52:40.205509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.525 [2024-10-01 13:52:40.205542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.525 [2024-10-01 13:52:40.205573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.525 [2024-10-01 13:52:40.205591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.525 [2024-10-01 13:52:40.205605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.525 [2024-10-01 13:52:40.205635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.525 [2024-10-01 13:52:40.210067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.525 [2024-10-01 13:52:40.210181] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.525 [2024-10-01 13:52:40.210212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.525 [2024-10-01 13:52:40.210229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.525 [2024-10-01 13:52:40.210262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.525 [2024-10-01 13:52:40.210293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.525 [2024-10-01 13:52:40.210310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.525 [2024-10-01 13:52:40.210323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.525 [2024-10-01 13:52:40.210354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.525 [2024-10-01 13:52:40.215429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.525 [2024-10-01 13:52:40.215544] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.525 [2024-10-01 13:52:40.215574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.525 [2024-10-01 13:52:40.215593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.525 [2024-10-01 13:52:40.215625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.525 [2024-10-01 13:52:40.215674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.525 [2024-10-01 13:52:40.215693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.525 [2024-10-01 13:52:40.215707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.525 [2024-10-01 13:52:40.215738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.525 [2024-10-01 13:52:40.220223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.525 [2024-10-01 13:52:40.220340] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.525 [2024-10-01 13:52:40.220371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.525 [2024-10-01 13:52:40.220388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.525 [2024-10-01 13:52:40.220421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.525 [2024-10-01 13:52:40.220452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.525 [2024-10-01 13:52:40.220469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.525 [2024-10-01 13:52:40.220484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.525 [2024-10-01 13:52:40.220513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.525 [2024-10-01 13:52:40.225519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.525 [2024-10-01 13:52:40.225634] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.525 [2024-10-01 13:52:40.225665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.525 [2024-10-01 13:52:40.225683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.525 [2024-10-01 13:52:40.225715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.525 [2024-10-01 13:52:40.225746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.525 [2024-10-01 13:52:40.225763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.525 [2024-10-01 13:52:40.225777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.525 [2024-10-01 13:52:40.226374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.525 [2024-10-01 13:52:40.230319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.525 [2024-10-01 13:52:40.230461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.525 [2024-10-01 13:52:40.230493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.525 [2024-10-01 13:52:40.230512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.525 [2024-10-01 13:52:40.230572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.525 [2024-10-01 13:52:40.230608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.525 [2024-10-01 13:52:40.230626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.525 [2024-10-01 13:52:40.230641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.525 [2024-10-01 13:52:40.230690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.525 [2024-10-01 13:52:40.237583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.525 [2024-10-01 13:52:40.237935] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.525 [2024-10-01 13:52:40.237979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.525 [2024-10-01 13:52:40.238000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.525 [2024-10-01 13:52:40.238071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.525 [2024-10-01 13:52:40.238109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.525 [2024-10-01 13:52:40.238128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.525 [2024-10-01 13:52:40.238142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.525 [2024-10-01 13:52:40.238173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.525 [2024-10-01 13:52:40.240431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.525 [2024-10-01 13:52:40.240542] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.525 [2024-10-01 13:52:40.240573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.525 [2024-10-01 13:52:40.240591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.525 [2024-10-01 13:52:40.241177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.525 [2024-10-01 13:52:40.241358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.525 [2024-10-01 13:52:40.241385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.525 [2024-10-01 13:52:40.241400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.525 [2024-10-01 13:52:40.241507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.525 [2024-10-01 13:52:40.248662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.525 [2024-10-01 13:52:40.248784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.526 [2024-10-01 13:52:40.248816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.526 [2024-10-01 13:52:40.248833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.526 [2024-10-01 13:52:40.248865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.526 [2024-10-01 13:52:40.248896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.526 [2024-10-01 13:52:40.248929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.526 [2024-10-01 13:52:40.248947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.526 [2024-10-01 13:52:40.248979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.526 [2024-10-01 13:52:40.252393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.526 [2024-10-01 13:52:40.252726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.526 [2024-10-01 13:52:40.252769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.526 [2024-10-01 13:52:40.252810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.526 [2024-10-01 13:52:40.252900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.526 [2024-10-01 13:52:40.252959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.526 [2024-10-01 13:52:40.252978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.526 [2024-10-01 13:52:40.252993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.526 [2024-10-01 13:52:40.253024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.526 [2024-10-01 13:52:40.258877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.526 [2024-10-01 13:52:40.259001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.526 [2024-10-01 13:52:40.259033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.526 [2024-10-01 13:52:40.259051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.526 [2024-10-01 13:52:40.259622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.526 [2024-10-01 13:52:40.259807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.526 [2024-10-01 13:52:40.259844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.526 [2024-10-01 13:52:40.259861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.526 [2024-10-01 13:52:40.259984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.526 [2024-10-01 13:52:40.263461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.526 [2024-10-01 13:52:40.263580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.526 [2024-10-01 13:52:40.263611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.526 [2024-10-01 13:52:40.263628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.526 [2024-10-01 13:52:40.263661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.526 [2024-10-01 13:52:40.263692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.526 [2024-10-01 13:52:40.263709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.526 [2024-10-01 13:52:40.263723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.526 [2024-10-01 13:52:40.263764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.526 [2024-10-01 13:52:40.269389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.526 [2024-10-01 13:52:40.269506] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.526 [2024-10-01 13:52:40.269538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.526 [2024-10-01 13:52:40.269555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.526 [2024-10-01 13:52:40.269588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.526 [2024-10-01 13:52:40.269619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.526 [2024-10-01 13:52:40.269655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.526 [2024-10-01 13:52:40.269671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.526 [2024-10-01 13:52:40.269703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.526 [2024-10-01 13:52:40.273808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.526 [2024-10-01 13:52:40.273937] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.526 [2024-10-01 13:52:40.273968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.526 [2024-10-01 13:52:40.273986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.526 [2024-10-01 13:52:40.274573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.526 [2024-10-01 13:52:40.274761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.526 [2024-10-01 13:52:40.274797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.526 [2024-10-01 13:52:40.274815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.526 [2024-10-01 13:52:40.274939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.526 [2024-10-01 13:52:40.279563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.526 [2024-10-01 13:52:40.279681] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.526 [2024-10-01 13:52:40.279712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.526 [2024-10-01 13:52:40.279729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.526 [2024-10-01 13:52:40.279762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.526 [2024-10-01 13:52:40.279793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.526 [2024-10-01 13:52:40.279810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.526 [2024-10-01 13:52:40.279824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.526 [2024-10-01 13:52:40.279855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.526 [2024-10-01 13:52:40.284247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.526 [2024-10-01 13:52:40.284365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.526 [2024-10-01 13:52:40.284396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.526 [2024-10-01 13:52:40.284414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.526 [2024-10-01 13:52:40.284447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.526 [2024-10-01 13:52:40.284478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.526 [2024-10-01 13:52:40.284495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.526 [2024-10-01 13:52:40.284509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.526 [2024-10-01 13:52:40.284540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.526 [2024-10-01 13:52:40.289658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.526 [2024-10-01 13:52:40.289775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.526 [2024-10-01 13:52:40.289807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.526 [2024-10-01 13:52:40.289825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.526 [2024-10-01 13:52:40.289858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.526 [2024-10-01 13:52:40.289889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.526 [2024-10-01 13:52:40.289906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.526 [2024-10-01 13:52:40.289940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.526 [2024-10-01 13:52:40.289974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.526 [2024-10-01 13:52:40.294489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.526 [2024-10-01 13:52:40.294616] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.526 [2024-10-01 13:52:40.294648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.526 [2024-10-01 13:52:40.294665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.526 [2024-10-01 13:52:40.294698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.526 [2024-10-01 13:52:40.294730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.526 [2024-10-01 13:52:40.294746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.526 [2024-10-01 13:52:40.294760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.527 [2024-10-01 13:52:40.294797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.527 [2024-10-01 13:52:40.299755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.527 [2024-10-01 13:52:40.299883] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.527 [2024-10-01 13:52:40.299931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.527 [2024-10-01 13:52:40.299953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.527 [2024-10-01 13:52:40.299987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.527 [2024-10-01 13:52:40.300018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.527 [2024-10-01 13:52:40.300036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.527 [2024-10-01 13:52:40.300050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.527 [2024-10-01 13:52:40.300081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.527 [2024-10-01 13:52:40.304675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.527 [2024-10-01 13:52:40.304821] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.527 [2024-10-01 13:52:40.304854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.527 [2024-10-01 13:52:40.304872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.527 [2024-10-01 13:52:40.304957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.527 [2024-10-01 13:52:40.304991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.527 [2024-10-01 13:52:40.305008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.527 [2024-10-01 13:52:40.305023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.527 [2024-10-01 13:52:40.305055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.527 [2024-10-01 13:52:40.309849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.527 [2024-10-01 13:52:40.309997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.527 [2024-10-01 13:52:40.310030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.527 [2024-10-01 13:52:40.310048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.527 [2024-10-01 13:52:40.310081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.527 [2024-10-01 13:52:40.310113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.527 [2024-10-01 13:52:40.310130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.527 [2024-10-01 13:52:40.310146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.527 [2024-10-01 13:52:40.310177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.527 [2024-10-01 13:52:40.314773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.527 [2024-10-01 13:52:40.314903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.527 [2024-10-01 13:52:40.314955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.527 [2024-10-01 13:52:40.314974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.527 [2024-10-01 13:52:40.315009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.527 [2024-10-01 13:52:40.315041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.527 [2024-10-01 13:52:40.315058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.527 [2024-10-01 13:52:40.315077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.527 [2024-10-01 13:52:40.315109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.527 [2024-10-01 13:52:40.319970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.527 [2024-10-01 13:52:40.320093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.527 [2024-10-01 13:52:40.320125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.527 [2024-10-01 13:52:40.320143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.527 [2024-10-01 13:52:40.320176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.527 [2024-10-01 13:52:40.320208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.527 [2024-10-01 13:52:40.320225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.527 [2024-10-01 13:52:40.320265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.527 [2024-10-01 13:52:40.320299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.527 [2024-10-01 13:52:40.324870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.527 [2024-10-01 13:52:40.324997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.527 [2024-10-01 13:52:40.325029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.527 [2024-10-01 13:52:40.325047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.527 [2024-10-01 13:52:40.325079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.527 [2024-10-01 13:52:40.325111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.527 [2024-10-01 13:52:40.325128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.527 [2024-10-01 13:52:40.325142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.527 [2024-10-01 13:52:40.325172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.527 [2024-10-01 13:52:40.330066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.527 [2024-10-01 13:52:40.330181] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.527 [2024-10-01 13:52:40.330213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.527 [2024-10-01 13:52:40.330232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.527 [2024-10-01 13:52:40.330264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.527 [2024-10-01 13:52:40.330295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.527 [2024-10-01 13:52:40.330312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.527 [2024-10-01 13:52:40.330327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.527 [2024-10-01 13:52:40.330357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.527 [2024-10-01 13:52:40.334998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.527 [2024-10-01 13:52:40.335129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.527 [2024-10-01 13:52:40.335161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.527 [2024-10-01 13:52:40.335180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.527 [2024-10-01 13:52:40.335213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.527 [2024-10-01 13:52:40.335244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.527 [2024-10-01 13:52:40.335261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.527 [2024-10-01 13:52:40.335276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.527 [2024-10-01 13:52:40.335307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.527 [2024-10-01 13:52:40.340158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.527 [2024-10-01 13:52:40.340300] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.527 [2024-10-01 13:52:40.340332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.528 [2024-10-01 13:52:40.340350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.528 [2024-10-01 13:52:40.340382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.528 [2024-10-01 13:52:40.340413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.528 [2024-10-01 13:52:40.340430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.528 [2024-10-01 13:52:40.340444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.528 [2024-10-01 13:52:40.341676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.528 [2024-10-01 13:52:40.345090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.528 [2024-10-01 13:52:40.345203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.528 [2024-10-01 13:52:40.345234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.528 [2024-10-01 13:52:40.345251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.528 [2024-10-01 13:52:40.345284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.528 [2024-10-01 13:52:40.345314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.528 [2024-10-01 13:52:40.345332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.528 [2024-10-01 13:52:40.345346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.528 [2024-10-01 13:52:40.345376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.528 [2024-10-01 13:52:40.350276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.528 [2024-10-01 13:52:40.350387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.528 [2024-10-01 13:52:40.350419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.528 [2024-10-01 13:52:40.350436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.528 [2024-10-01 13:52:40.350469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.528 [2024-10-01 13:52:40.350499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.528 [2024-10-01 13:52:40.350517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.528 [2024-10-01 13:52:40.350531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.528 [2024-10-01 13:52:40.350576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.528 [2024-10-01 13:52:40.355184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.528 [2024-10-01 13:52:40.355298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.528 [2024-10-01 13:52:40.355329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.528 [2024-10-01 13:52:40.355347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.528 [2024-10-01 13:52:40.355380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.528 [2024-10-01 13:52:40.355430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.528 [2024-10-01 13:52:40.355449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.528 [2024-10-01 13:52:40.355462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.528 [2024-10-01 13:52:40.356679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.528 [2024-10-01 13:52:40.360363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.528 [2024-10-01 13:52:40.360474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.528 [2024-10-01 13:52:40.360506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.528 [2024-10-01 13:52:40.360523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.528 [2024-10-01 13:52:40.361113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.528 [2024-10-01 13:52:40.361315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.528 [2024-10-01 13:52:40.361353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.528 [2024-10-01 13:52:40.361372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.528 [2024-10-01 13:52:40.361481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.528 [2024-10-01 13:52:40.365274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.528 [2024-10-01 13:52:40.365386] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.528 [2024-10-01 13:52:40.365418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.528 [2024-10-01 13:52:40.365436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.528 [2024-10-01 13:52:40.365468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.528 [2024-10-01 13:52:40.365498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.528 [2024-10-01 13:52:40.365516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.528 [2024-10-01 13:52:40.365530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.528 [2024-10-01 13:52:40.365561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.528 [2024-10-01 13:52:40.372413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.528 [2024-10-01 13:52:40.372860] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.528 [2024-10-01 13:52:40.372907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.528 [2024-10-01 13:52:40.372944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.528 [2024-10-01 13:52:40.373017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.528 [2024-10-01 13:52:40.373057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.528 [2024-10-01 13:52:40.373076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.528 [2024-10-01 13:52:40.373091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.528 [2024-10-01 13:52:40.373155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.528 [2024-10-01 13:52:40.375362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.528 [2024-10-01 13:52:40.375475] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.528 [2024-10-01 13:52:40.375506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.528 [2024-10-01 13:52:40.375524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.528 [2024-10-01 13:52:40.376138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.528 [2024-10-01 13:52:40.376329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.528 [2024-10-01 13:52:40.376370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.528 [2024-10-01 13:52:40.376388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.528 [2024-10-01 13:52:40.376500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.528 [2024-10-01 13:52:40.383875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.528 [2024-10-01 13:52:40.384067] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.528 [2024-10-01 13:52:40.384102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.528 [2024-10-01 13:52:40.384121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.528 [2024-10-01 13:52:40.384154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.528 [2024-10-01 13:52:40.384186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.528 [2024-10-01 13:52:40.384204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.528 [2024-10-01 13:52:40.384220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.528 [2024-10-01 13:52:40.384251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.528 [2024-10-01 13:52:40.385452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.528 [2024-10-01 13:52:40.385561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.528 [2024-10-01 13:52:40.385592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.528 [2024-10-01 13:52:40.385609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.528 [2024-10-01 13:52:40.386864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.528 [2024-10-01 13:52:40.387687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.528 [2024-10-01 13:52:40.387728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.528 [2024-10-01 13:52:40.387748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.528 [2024-10-01 13:52:40.388104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.528 [2024-10-01 13:52:40.394366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.528 [2024-10-01 13:52:40.394508] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.528 [2024-10-01 13:52:40.394558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.529 [2024-10-01 13:52:40.394612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.529 [2024-10-01 13:52:40.395227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.529 [2024-10-01 13:52:40.395419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.529 [2024-10-01 13:52:40.395455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.529 [2024-10-01 13:52:40.395474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.529 [2024-10-01 13:52:40.395606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.529 [2024-10-01 13:52:40.395675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.529 [2024-10-01 13:52:40.395765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.529 [2024-10-01 13:52:40.395806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.529 [2024-10-01 13:52:40.395826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.529 [2024-10-01 13:52:40.395860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.529 [2024-10-01 13:52:40.395891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.529 [2024-10-01 13:52:40.395908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.529 [2024-10-01 13:52:40.395939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.529 [2024-10-01 13:52:40.397176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.529 [2024-10-01 13:52:40.404865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.529 [2024-10-01 13:52:40.404998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.529 [2024-10-01 13:52:40.405030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.529 [2024-10-01 13:52:40.405048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.529 [2024-10-01 13:52:40.405081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.529 [2024-10-01 13:52:40.405112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.529 [2024-10-01 13:52:40.405129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.529 [2024-10-01 13:52:40.405143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.529 [2024-10-01 13:52:40.405174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.529 [2024-10-01 13:52:40.405745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.529 [2024-10-01 13:52:40.406402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.529 [2024-10-01 13:52:40.406446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.529 [2024-10-01 13:52:40.406474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.529 [2024-10-01 13:52:40.406645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.529 [2024-10-01 13:52:40.406771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.529 [2024-10-01 13:52:40.406823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.529 [2024-10-01 13:52:40.406842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.529 [2024-10-01 13:52:40.406885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.529 [2024-10-01 13:52:40.415095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.529 [2024-10-01 13:52:40.415214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.529 [2024-10-01 13:52:40.415246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.529 [2024-10-01 13:52:40.415264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.529 [2024-10-01 13:52:40.415297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.529 [2024-10-01 13:52:40.415327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.529 [2024-10-01 13:52:40.415344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.529 [2024-10-01 13:52:40.415359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.529 [2024-10-01 13:52:40.415390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.529 [2024-10-01 13:52:40.417694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.529 [2024-10-01 13:52:40.418041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.529 [2024-10-01 13:52:40.418084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.529 [2024-10-01 13:52:40.418105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.529 [2024-10-01 13:52:40.418174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.529 [2024-10-01 13:52:40.418213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.529 [2024-10-01 13:52:40.418231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.529 [2024-10-01 13:52:40.418245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.529 [2024-10-01 13:52:40.418276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.529 [2024-10-01 13:52:40.425203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.529 [2024-10-01 13:52:40.425317] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.529 [2024-10-01 13:52:40.425348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.529 [2024-10-01 13:52:40.425366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.529 [2024-10-01 13:52:40.425413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.529 [2024-10-01 13:52:40.425449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.529 [2024-10-01 13:52:40.425466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.529 [2024-10-01 13:52:40.425480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.529 [2024-10-01 13:52:40.425511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.529 [2024-10-01 13:52:40.428844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.529 [2024-10-01 13:52:40.428974] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.529 [2024-10-01 13:52:40.429006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.529 [2024-10-01 13:52:40.429024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.529 [2024-10-01 13:52:40.429057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.529 [2024-10-01 13:52:40.429087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.529 [2024-10-01 13:52:40.429104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.529 [2024-10-01 13:52:40.429117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.529 [2024-10-01 13:52:40.429148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.529 [2024-10-01 13:52:40.435294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.529 [2024-10-01 13:52:40.435406] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.529 [2024-10-01 13:52:40.435437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.529 [2024-10-01 13:52:40.435455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.529 [2024-10-01 13:52:40.435488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.529 [2024-10-01 13:52:40.435519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.529 [2024-10-01 13:52:40.435536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.529 [2024-10-01 13:52:40.435550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.529 [2024-10-01 13:52:40.436137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.529 [2024-10-01 13:52:40.439147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.529 [2024-10-01 13:52:40.439266] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.529 [2024-10-01 13:52:40.439297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.529 [2024-10-01 13:52:40.439315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.529 [2024-10-01 13:52:40.439891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.529 [2024-10-01 13:52:40.440102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.529 [2024-10-01 13:52:40.440138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.529 [2024-10-01 13:52:40.440156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.529 [2024-10-01 13:52:40.440265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.529 [2024-10-01 13:52:40.445387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.529 [2024-10-01 13:52:40.445501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.529 [2024-10-01 13:52:40.445532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.529 [2024-10-01 13:52:40.445550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.530 [2024-10-01 13:52:40.446797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.530 [2024-10-01 13:52:40.447596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.530 [2024-10-01 13:52:40.447637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.530 [2024-10-01 13:52:40.447657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.530 [2024-10-01 13:52:40.447993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.530 [2024-10-01 13:52:40.449650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.530 [2024-10-01 13:52:40.449761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.530 [2024-10-01 13:52:40.449793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.530 [2024-10-01 13:52:40.449811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.530 [2024-10-01 13:52:40.449844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.530 [2024-10-01 13:52:40.449875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.530 [2024-10-01 13:52:40.449892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.530 [2024-10-01 13:52:40.449906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.530 [2024-10-01 13:52:40.449955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.530 [2024-10-01 13:52:40.455489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.530 [2024-10-01 13:52:40.455603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.530 [2024-10-01 13:52:40.455633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.530 [2024-10-01 13:52:40.455651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.530 [2024-10-01 13:52:40.456876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.530 [2024-10-01 13:52:40.457115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.530 [2024-10-01 13:52:40.457154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.530 [2024-10-01 13:52:40.457172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.530 [2024-10-01 13:52:40.458072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.530 [2024-10-01 13:52:40.459946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.530 [2024-10-01 13:52:40.460058] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.530 [2024-10-01 13:52:40.460100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.530 [2024-10-01 13:52:40.460120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.530 [2024-10-01 13:52:40.460152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.530 [2024-10-01 13:52:40.460183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.530 [2024-10-01 13:52:40.460200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.530 [2024-10-01 13:52:40.460232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.530 [2024-10-01 13:52:40.460266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.530 [2024-10-01 13:52:40.466125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.530 [2024-10-01 13:52:40.466365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.530 [2024-10-01 13:52:40.466407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.530 [2024-10-01 13:52:40.466427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.530 [2024-10-01 13:52:40.466547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.530 [2024-10-01 13:52:40.466599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.530 [2024-10-01 13:52:40.466620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.530 [2024-10-01 13:52:40.466635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.530 [2024-10-01 13:52:40.466666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.530 [2024-10-01 13:52:40.470039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.530 [2024-10-01 13:52:40.470155] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.530 [2024-10-01 13:52:40.470196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.530 [2024-10-01 13:52:40.470216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.530 [2024-10-01 13:52:40.470249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.530 [2024-10-01 13:52:40.470280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.530 [2024-10-01 13:52:40.470296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.530 [2024-10-01 13:52:40.470310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.530 [2024-10-01 13:52:40.470342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.530 [2024-10-01 13:52:40.477386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.530 [2024-10-01 13:52:40.477721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.530 [2024-10-01 13:52:40.477765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.530 [2024-10-01 13:52:40.477785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.530 [2024-10-01 13:52:40.477855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.530 [2024-10-01 13:52:40.477893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.530 [2024-10-01 13:52:40.477926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.530 [2024-10-01 13:52:40.477944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.530 [2024-10-01 13:52:40.477976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.530 [2024-10-01 13:52:40.480134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.530 [2024-10-01 13:52:40.480260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.530 [2024-10-01 13:52:40.480293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.530 [2024-10-01 13:52:40.480311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.530 [2024-10-01 13:52:40.480343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.530 [2024-10-01 13:52:40.480927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.530 [2024-10-01 13:52:40.480964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.530 [2024-10-01 13:52:40.480983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.530 [2024-10-01 13:52:40.481184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.530 [2024-10-01 13:52:40.488440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.530 [2024-10-01 13:52:40.488558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.530 [2024-10-01 13:52:40.488589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.530 [2024-10-01 13:52:40.488607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.530 [2024-10-01 13:52:40.488639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.530 [2024-10-01 13:52:40.488670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.530 [2024-10-01 13:52:40.488686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.530 [2024-10-01 13:52:40.488700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.530 [2024-10-01 13:52:40.488731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.530 [2024-10-01 13:52:40.492216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.530 [2024-10-01 13:52:40.492550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.530 [2024-10-01 13:52:40.492594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.530 [2024-10-01 13:52:40.492615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.530 [2024-10-01 13:52:40.492684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.530 [2024-10-01 13:52:40.492722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.530 [2024-10-01 13:52:40.492740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.530 [2024-10-01 13:52:40.492754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.530 [2024-10-01 13:52:40.492785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.530 [2024-10-01 13:52:40.498697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.530 [2024-10-01 13:52:40.498810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.530 [2024-10-01 13:52:40.498842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.530 [2024-10-01 13:52:40.498859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.530 [2024-10-01 13:52:40.499450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.530 [2024-10-01 13:52:40.499656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.530 [2024-10-01 13:52:40.499693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.530 [2024-10-01 13:52:40.499712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.530 [2024-10-01 13:52:40.499826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.530 [2024-10-01 13:52:40.503342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.530 [2024-10-01 13:52:40.503458] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.530 [2024-10-01 13:52:40.503490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.530 [2024-10-01 13:52:40.503507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.531 [2024-10-01 13:52:40.503539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.531 [2024-10-01 13:52:40.503570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.531 [2024-10-01 13:52:40.503587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.531 [2024-10-01 13:52:40.503601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.531 [2024-10-01 13:52:40.503631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.531 [2024-10-01 13:52:40.509189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.531 [2024-10-01 13:52:40.509307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.531 [2024-10-01 13:52:40.509346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.531 [2024-10-01 13:52:40.509364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.531 [2024-10-01 13:52:40.509397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.531 [2024-10-01 13:52:40.509428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.531 [2024-10-01 13:52:40.509445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.531 [2024-10-01 13:52:40.509460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.531 [2024-10-01 13:52:40.509490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.531 [2024-10-01 13:52:40.513652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.531 [2024-10-01 13:52:40.513768] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.531 [2024-10-01 13:52:40.513799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.531 [2024-10-01 13:52:40.513817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.531 [2024-10-01 13:52:40.514419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.531 [2024-10-01 13:52:40.514625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.531 [2024-10-01 13:52:40.514662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.531 [2024-10-01 13:52:40.514681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.531 [2024-10-01 13:52:40.514818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.531 [2024-10-01 13:52:40.519483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.531 [2024-10-01 13:52:40.519613] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.531 [2024-10-01 13:52:40.519645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.531 [2024-10-01 13:52:40.519664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.531 [2024-10-01 13:52:40.519697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.531 [2024-10-01 13:52:40.519728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.531 [2024-10-01 13:52:40.519746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.531 [2024-10-01 13:52:40.519760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.531 [2024-10-01 13:52:40.519791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.531 [2024-10-01 13:52:40.524264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.531 [2024-10-01 13:52:40.524381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.531 [2024-10-01 13:52:40.524413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.531 [2024-10-01 13:52:40.524431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.531 [2024-10-01 13:52:40.524464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.531 [2024-10-01 13:52:40.524495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.531 [2024-10-01 13:52:40.524512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.531 [2024-10-01 13:52:40.524527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.531 [2024-10-01 13:52:40.524557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.531 [2024-10-01 13:52:40.529665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.531 [2024-10-01 13:52:40.529777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.531 [2024-10-01 13:52:40.529809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.531 [2024-10-01 13:52:40.529828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.531 [2024-10-01 13:52:40.529861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.531 [2024-10-01 13:52:40.529892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.531 [2024-10-01 13:52:40.529924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.531 [2024-10-01 13:52:40.529943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.531 [2024-10-01 13:52:40.529975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.531 [2024-10-01 13:52:40.534492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.531 [2024-10-01 13:52:40.534616] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.531 [2024-10-01 13:52:40.534648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.531 [2024-10-01 13:52:40.534690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.531 [2024-10-01 13:52:40.534726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.531 [2024-10-01 13:52:40.534757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.531 [2024-10-01 13:52:40.534774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.531 [2024-10-01 13:52:40.534788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.531 [2024-10-01 13:52:40.534819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.531 [2024-10-01 13:52:40.539756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.531 [2024-10-01 13:52:40.539870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.531 [2024-10-01 13:52:40.539902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.531 [2024-10-01 13:52:40.539938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.531 [2024-10-01 13:52:40.539973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.531 [2024-10-01 13:52:40.540004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.531 [2024-10-01 13:52:40.540021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.531 [2024-10-01 13:52:40.540036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.531 [2024-10-01 13:52:40.540066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.531 [2024-10-01 13:52:40.544597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.531 [2024-10-01 13:52:40.544717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.531 [2024-10-01 13:52:40.544749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.531 [2024-10-01 13:52:40.544771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.531 [2024-10-01 13:52:40.544819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.531 [2024-10-01 13:52:40.544854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.531 [2024-10-01 13:52:40.544871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.531 [2024-10-01 13:52:40.544886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.531 [2024-10-01 13:52:40.544933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.531 [2024-10-01 13:52:40.549848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.531 [2024-10-01 13:52:40.549976] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.531 [2024-10-01 13:52:40.550017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.531 [2024-10-01 13:52:40.550035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.531 [2024-10-01 13:52:40.551270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.531 [2024-10-01 13:52:40.552048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.531 [2024-10-01 13:52:40.552105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.531 [2024-10-01 13:52:40.552125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.531 [2024-10-01 13:52:40.552445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.531 [2024-10-01 13:52:40.554694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.531 [2024-10-01 13:52:40.554812] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.531 [2024-10-01 13:52:40.554843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.531 [2024-10-01 13:52:40.554861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.532 [2024-10-01 13:52:40.554893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.532 [2024-10-01 13:52:40.554942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.532 [2024-10-01 13:52:40.554962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.532 [2024-10-01 13:52:40.554976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.532 [2024-10-01 13:52:40.555009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.532 [2024-10-01 13:52:40.559954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.532 [2024-10-01 13:52:40.560068] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.532 [2024-10-01 13:52:40.560099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.532 [2024-10-01 13:52:40.560117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.532 [2024-10-01 13:52:40.561332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.532 [2024-10-01 13:52:40.561580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.532 [2024-10-01 13:52:40.561618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.532 [2024-10-01 13:52:40.561636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.532 [2024-10-01 13:52:40.562547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.532 [2024-10-01 13:52:40.564783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.532 [2024-10-01 13:52:40.566085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.532 [2024-10-01 13:52:40.566130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.532 [2024-10-01 13:52:40.566150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.532 [2024-10-01 13:52:40.566937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.532 [2024-10-01 13:52:40.567283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.532 [2024-10-01 13:52:40.567322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.532 [2024-10-01 13:52:40.567341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.532 [2024-10-01 13:52:40.567413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.532 [2024-10-01 13:52:40.570674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.532 [2024-10-01 13:52:40.570795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.532 [2024-10-01 13:52:40.570827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.532 [2024-10-01 13:52:40.570845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.532 [2024-10-01 13:52:40.570877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.532 [2024-10-01 13:52:40.570908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.532 [2024-10-01 13:52:40.570945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.532 [2024-10-01 13:52:40.570959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.532 [2024-10-01 13:52:40.570991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.532 [2024-10-01 13:52:40.574868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.532 [2024-10-01 13:52:40.574995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.532 [2024-10-01 13:52:40.575027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.532 [2024-10-01 13:52:40.575044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.532 [2024-10-01 13:52:40.576257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.532 [2024-10-01 13:52:40.576488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.532 [2024-10-01 13:52:40.576524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.532 [2024-10-01 13:52:40.576542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.532 [2024-10-01 13:52:40.577443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.532 [2024-10-01 13:52:40.581756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.532 [2024-10-01 13:52:40.582109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.532 [2024-10-01 13:52:40.582153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.532 [2024-10-01 13:52:40.582173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.532 [2024-10-01 13:52:40.582244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.532 [2024-10-01 13:52:40.582282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.532 [2024-10-01 13:52:40.582300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.532 [2024-10-01 13:52:40.582314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.532 [2024-10-01 13:52:40.582349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.532 [2024-10-01 13:52:40.585607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.532 [2024-10-01 13:52:40.585727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.532 [2024-10-01 13:52:40.585759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.532 [2024-10-01 13:52:40.585777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.532 [2024-10-01 13:52:40.585831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.532 [2024-10-01 13:52:40.585869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.532 [2024-10-01 13:52:40.585887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.532 [2024-10-01 13:52:40.585901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.532 [2024-10-01 13:52:40.585948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.532 [2024-10-01 13:52:40.592968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.532 [2024-10-01 13:52:40.593089] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.532 [2024-10-01 13:52:40.593121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.532 [2024-10-01 13:52:40.593148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.532 [2024-10-01 13:52:40.593181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.532 [2024-10-01 13:52:40.593212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.532 [2024-10-01 13:52:40.593229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.532 [2024-10-01 13:52:40.593243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.532 [2024-10-01 13:52:40.593274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.532 [2024-10-01 13:52:40.596781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.532 [2024-10-01 13:52:40.597135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.532 [2024-10-01 13:52:40.597179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.532 [2024-10-01 13:52:40.597200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.532 [2024-10-01 13:52:40.597270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.532 [2024-10-01 13:52:40.597309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.532 [2024-10-01 13:52:40.597327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.532 [2024-10-01 13:52:40.597341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.532 [2024-10-01 13:52:40.597373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.532 [2024-10-01 13:52:40.603376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.532 [2024-10-01 13:52:40.603495] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.532 [2024-10-01 13:52:40.603526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.532 [2024-10-01 13:52:40.603544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.532 [2024-10-01 13:52:40.604151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.532 [2024-10-01 13:52:40.604340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.532 [2024-10-01 13:52:40.604377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.532 [2024-10-01 13:52:40.604427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.532 [2024-10-01 13:52:40.604554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.532 [2024-10-01 13:52:40.608102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.532 [2024-10-01 13:52:40.608219] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.532 [2024-10-01 13:52:40.608250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.532 [2024-10-01 13:52:40.608267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.532 [2024-10-01 13:52:40.608300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.532 [2024-10-01 13:52:40.608331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.532 [2024-10-01 13:52:40.608348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.532 [2024-10-01 13:52:40.608363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.532 [2024-10-01 13:52:40.608394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.532 [2024-10-01 13:52:40.613961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.532 [2024-10-01 13:52:40.614082] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.532 [2024-10-01 13:52:40.614115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.532 [2024-10-01 13:52:40.614133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.532 [2024-10-01 13:52:40.614166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.532 [2024-10-01 13:52:40.614198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.532 [2024-10-01 13:52:40.614227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.532 [2024-10-01 13:52:40.614241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.532 [2024-10-01 13:52:40.614272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.532 [2024-10-01 13:52:40.618465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.532 [2024-10-01 13:52:40.618594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.533 [2024-10-01 13:52:40.618626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.533 [2024-10-01 13:52:40.618644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.533 [2024-10-01 13:52:40.619259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.533 [2024-10-01 13:52:40.619449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.533 [2024-10-01 13:52:40.619486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.533 [2024-10-01 13:52:40.619504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.533 [2024-10-01 13:52:40.619616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.533 [2024-10-01 13:52:40.624268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.533 [2024-10-01 13:52:40.624429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.533 [2024-10-01 13:52:40.624461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.533 [2024-10-01 13:52:40.624479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.533 [2024-10-01 13:52:40.624525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.533 [2024-10-01 13:52:40.624558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.533 [2024-10-01 13:52:40.624575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.533 [2024-10-01 13:52:40.624589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.533 [2024-10-01 13:52:40.624620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.533 [2024-10-01 13:52:40.629034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.533 [2024-10-01 13:52:40.629149] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.533 [2024-10-01 13:52:40.629180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.533 [2024-10-01 13:52:40.629198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.533 [2024-10-01 13:52:40.629230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.533 [2024-10-01 13:52:40.629261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.533 [2024-10-01 13:52:40.629278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.533 [2024-10-01 13:52:40.629292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.533 [2024-10-01 13:52:40.629322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.533 [2024-10-01 13:52:40.634396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.533 [2024-10-01 13:52:40.634525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.533 [2024-10-01 13:52:40.634572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.533 [2024-10-01 13:52:40.634592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.533 [2024-10-01 13:52:40.634625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.533 [2024-10-01 13:52:40.634671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.533 [2024-10-01 13:52:40.634693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.533 [2024-10-01 13:52:40.634708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.533 [2024-10-01 13:52:40.634738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.533 [2024-10-01 13:52:40.639251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.533 [2024-10-01 13:52:40.639369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.533 [2024-10-01 13:52:40.639399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.533 [2024-10-01 13:52:40.639417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.533 [2024-10-01 13:52:40.639450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.533 [2024-10-01 13:52:40.639507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.533 [2024-10-01 13:52:40.639526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.533 [2024-10-01 13:52:40.639541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.533 [2024-10-01 13:52:40.639572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.533 [2024-10-01 13:52:40.644486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.533 [2024-10-01 13:52:40.644602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.533 [2024-10-01 13:52:40.644633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.533 [2024-10-01 13:52:40.644651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.533 [2024-10-01 13:52:40.644684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.533 [2024-10-01 13:52:40.644715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.533 [2024-10-01 13:52:40.644732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.533 [2024-10-01 13:52:40.644746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.533 [2024-10-01 13:52:40.644776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.533 [2024-10-01 13:52:40.649365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.533 [2024-10-01 13:52:40.649482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.533 [2024-10-01 13:52:40.649514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.533 [2024-10-01 13:52:40.649532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.533 [2024-10-01 13:52:40.649578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.533 [2024-10-01 13:52:40.649638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.533 [2024-10-01 13:52:40.649676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.533 [2024-10-01 13:52:40.649708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.533 [2024-10-01 13:52:40.649763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.533 [2024-10-01 13:52:40.654811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.533 [2024-10-01 13:52:40.654956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.533 [2024-10-01 13:52:40.654989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.533 [2024-10-01 13:52:40.655008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.533 [2024-10-01 13:52:40.655042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.533 [2024-10-01 13:52:40.655074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.533 [2024-10-01 13:52:40.655091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.533 [2024-10-01 13:52:40.655106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.533 [2024-10-01 13:52:40.655167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.533 [2024-10-01 13:52:40.659580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.533 [2024-10-01 13:52:40.659704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.533 [2024-10-01 13:52:40.659735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.533 [2024-10-01 13:52:40.659753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.533 [2024-10-01 13:52:40.659786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.533 [2024-10-01 13:52:40.659817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.533 [2024-10-01 13:52:40.659834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.533 [2024-10-01 13:52:40.659847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.533 [2024-10-01 13:52:40.659878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.533 [2024-10-01 13:52:40.664992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.533 [2024-10-01 13:52:40.665117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.533 [2024-10-01 13:52:40.665149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.533 [2024-10-01 13:52:40.665167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.533 [2024-10-01 13:52:40.665216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.533 [2024-10-01 13:52:40.665251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.533 [2024-10-01 13:52:40.665269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.533 [2024-10-01 13:52:40.665283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.533 [2024-10-01 13:52:40.665314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.533 [2024-10-01 13:52:40.669813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.533 [2024-10-01 13:52:40.669943] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.533 [2024-10-01 13:52:40.669974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.533 [2024-10-01 13:52:40.669993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.533 [2024-10-01 13:52:40.670026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.533 [2024-10-01 13:52:40.670057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.533 [2024-10-01 13:52:40.670074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.533 [2024-10-01 13:52:40.670089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.533 [2024-10-01 13:52:40.670120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.533 [2024-10-01 13:52:40.675087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.533 [2024-10-01 13:52:40.675202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.533 [2024-10-01 13:52:40.675233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.533 [2024-10-01 13:52:40.675285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.533 [2024-10-01 13:52:40.675320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.533 [2024-10-01 13:52:40.675900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.533 [2024-10-01 13:52:40.675952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.533 [2024-10-01 13:52:40.675972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.533 [2024-10-01 13:52:40.676139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.533 [2024-10-01 13:52:40.679905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.533 [2024-10-01 13:52:40.680039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.533 [2024-10-01 13:52:40.680071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.534 [2024-10-01 13:52:40.680089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.534 [2024-10-01 13:52:40.680121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.534 [2024-10-01 13:52:40.680153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.534 [2024-10-01 13:52:40.680170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.534 [2024-10-01 13:52:40.680184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.534 [2024-10-01 13:52:40.680215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.534 [2024-10-01 13:52:40.685182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.534 [2024-10-01 13:52:40.686500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.534 [2024-10-01 13:52:40.686596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.534 [2024-10-01 13:52:40.686618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.534 [2024-10-01 13:52:40.687615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.534 [2024-10-01 13:52:40.687757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.534 [2024-10-01 13:52:40.687783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.534 [2024-10-01 13:52:40.687798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.534 [2024-10-01 13:52:40.687831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.534 [2024-10-01 13:52:40.690015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.534 [2024-10-01 13:52:40.690126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.534 [2024-10-01 13:52:40.690158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.534 [2024-10-01 13:52:40.690176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.534 [2024-10-01 13:52:40.690208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.534 [2024-10-01 13:52:40.690240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.534 [2024-10-01 13:52:40.690281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.534 [2024-10-01 13:52:40.690297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.534 [2024-10-01 13:52:40.690895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.534 [2024-10-01 13:52:40.695280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.534 [2024-10-01 13:52:40.695395] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.534 [2024-10-01 13:52:40.695426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.534 [2024-10-01 13:52:40.695444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.534 [2024-10-01 13:52:40.696685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.534 [2024-10-01 13:52:40.696946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.534 [2024-10-01 13:52:40.696982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.534 [2024-10-01 13:52:40.697001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.534 [2024-10-01 13:52:40.697898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.534 [2024-10-01 13:52:40.700106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.534 [2024-10-01 13:52:40.700229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.534 [2024-10-01 13:52:40.700270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.534 [2024-10-01 13:52:40.700290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.534 [2024-10-01 13:52:40.701539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.534 [2024-10-01 13:52:40.702336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.534 [2024-10-01 13:52:40.702375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.534 [2024-10-01 13:52:40.702395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.534 [2024-10-01 13:52:40.702737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.534 [2024-10-01 13:52:40.705366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.534 [2024-10-01 13:52:40.706053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.534 [2024-10-01 13:52:40.706094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.534 [2024-10-01 13:52:40.706114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.534 [2024-10-01 13:52:40.706280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.534 [2024-10-01 13:52:40.706408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.534 [2024-10-01 13:52:40.706429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.534 [2024-10-01 13:52:40.706444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.534 [2024-10-01 13:52:40.706483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.534 [2024-10-01 13:52:40.710198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.534 [2024-10-01 13:52:40.710317] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.534 [2024-10-01 13:52:40.710349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.534 [2024-10-01 13:52:40.710367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.534 [2024-10-01 13:52:40.710400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.534 [2024-10-01 13:52:40.710431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.534 [2024-10-01 13:52:40.710449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.534 [2024-10-01 13:52:40.710463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.534 [2024-10-01 13:52:40.711721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.534 [2024-10-01 13:52:40.717390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.534 [2024-10-01 13:52:40.717741] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.534 [2024-10-01 13:52:40.717785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.534 [2024-10-01 13:52:40.717807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.534 [2024-10-01 13:52:40.717879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.534 [2024-10-01 13:52:40.717941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.534 [2024-10-01 13:52:40.717968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.534 [2024-10-01 13:52:40.717986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.534 [2024-10-01 13:52:40.718018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.534 [2024-10-01 13:52:40.720297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.534 [2024-10-01 13:52:40.720407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.534 [2024-10-01 13:52:40.720438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.534 [2024-10-01 13:52:40.720456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.534 [2024-10-01 13:52:40.721055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.534 [2024-10-01 13:52:40.721262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.534 [2024-10-01 13:52:40.721299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.534 [2024-10-01 13:52:40.721317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.534 [2024-10-01 13:52:40.721436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.534 [2024-10-01 13:52:40.728681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.534 [2024-10-01 13:52:40.728837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.534 [2024-10-01 13:52:40.728870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.534 [2024-10-01 13:52:40.728889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.534 [2024-10-01 13:52:40.728973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.534 [2024-10-01 13:52:40.729008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.534 [2024-10-01 13:52:40.729026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.534 [2024-10-01 13:52:40.729041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.534 [2024-10-01 13:52:40.729073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.534 [2024-10-01 13:52:40.732365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.534 [2024-10-01 13:52:40.732781] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.534 [2024-10-01 13:52:40.732827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.534 [2024-10-01 13:52:40.732848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.534 [2024-10-01 13:52:40.732936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.534 [2024-10-01 13:52:40.732977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.534 [2024-10-01 13:52:40.732996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.534 [2024-10-01 13:52:40.733011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.534 [2024-10-01 13:52:40.733043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.534 [2024-10-01 13:52:40.739142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.534 [2024-10-01 13:52:40.739880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.534 [2024-10-01 13:52:40.739939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.534 [2024-10-01 13:52:40.739963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.534 [2024-10-01 13:52:40.740137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.534 [2024-10-01 13:52:40.740258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.534 [2024-10-01 13:52:40.740280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.534 [2024-10-01 13:52:40.740297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.534 [2024-10-01 13:52:40.740338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.534 [2024-10-01 13:52:40.743857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.534 [2024-10-01 13:52:40.743998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.534 [2024-10-01 13:52:40.744031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.534 [2024-10-01 13:52:40.744050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.534 [2024-10-01 13:52:40.744083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.534 [2024-10-01 13:52:40.744114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.534 [2024-10-01 13:52:40.744131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.534 [2024-10-01 13:52:40.744174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.534 [2024-10-01 13:52:40.744209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.534 [2024-10-01 13:52:40.749712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.534 [2024-10-01 13:52:40.749831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.534 [2024-10-01 13:52:40.749862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.535 [2024-10-01 13:52:40.749880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.535 [2024-10-01 13:52:40.749932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.535 [2024-10-01 13:52:40.749968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.535 [2024-10-01 13:52:40.749986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.535 [2024-10-01 13:52:40.750000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.535 [2024-10-01 13:52:40.750031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.535 [2024-10-01 13:52:40.754195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.535 [2024-10-01 13:52:40.754311] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.535 [2024-10-01 13:52:40.754343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.535 [2024-10-01 13:52:40.754362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.535 [2024-10-01 13:52:40.754971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.535 [2024-10-01 13:52:40.755171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.535 [2024-10-01 13:52:40.755208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.535 [2024-10-01 13:52:40.755224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.535 [2024-10-01 13:52:40.755333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.535 [2024-10-01 13:52:40.760012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.535 [2024-10-01 13:52:40.760136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.535 [2024-10-01 13:52:40.760169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.535 [2024-10-01 13:52:40.760187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.535 [2024-10-01 13:52:40.760220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.535 [2024-10-01 13:52:40.760251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.535 [2024-10-01 13:52:40.760269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.535 [2024-10-01 13:52:40.760284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.535 [2024-10-01 13:52:40.760315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.535 [2024-10-01 13:52:40.764710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.535 [2024-10-01 13:52:40.764853] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.535 [2024-10-01 13:52:40.764884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.535 [2024-10-01 13:52:40.764902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.535 [2024-10-01 13:52:40.764951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.535 [2024-10-01 13:52:40.764983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.535 [2024-10-01 13:52:40.765000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.535 [2024-10-01 13:52:40.765014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.535 [2024-10-01 13:52:40.765077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.535 [2024-10-01 13:52:40.770104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.535 [2024-10-01 13:52:40.770217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.535 [2024-10-01 13:52:40.770248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.535 [2024-10-01 13:52:40.770266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.535 [2024-10-01 13:52:40.770299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.535 [2024-10-01 13:52:40.770330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.535 [2024-10-01 13:52:40.770347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.535 [2024-10-01 13:52:40.770361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.535 [2024-10-01 13:52:40.770391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.535 [2024-10-01 13:52:40.774908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.535 [2024-10-01 13:52:40.775039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.535 [2024-10-01 13:52:40.775076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.535 [2024-10-01 13:52:40.775094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.535 [2024-10-01 13:52:40.775138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.535 [2024-10-01 13:52:40.775171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.535 [2024-10-01 13:52:40.775188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.535 [2024-10-01 13:52:40.775202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.535 [2024-10-01 13:52:40.775233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.535 [2024-10-01 13:52:40.780193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.535 [2024-10-01 13:52:40.780307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.535 [2024-10-01 13:52:40.780338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.535 [2024-10-01 13:52:40.780356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.535 [2024-10-01 13:52:40.780388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.535 [2024-10-01 13:52:40.780439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.535 [2024-10-01 13:52:40.780458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.535 [2024-10-01 13:52:40.780473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.535 [2024-10-01 13:52:40.780503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.535 [2024-10-01 13:52:40.785029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.535 [2024-10-01 13:52:40.785144] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.535 [2024-10-01 13:52:40.785176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.535 [2024-10-01 13:52:40.785193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.535 [2024-10-01 13:52:40.785225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.535 [2024-10-01 13:52:40.785272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.535 [2024-10-01 13:52:40.785293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.535 [2024-10-01 13:52:40.785308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.535 [2024-10-01 13:52:40.785339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.535 [2024-10-01 13:52:40.790283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.535 [2024-10-01 13:52:40.790397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.535 [2024-10-01 13:52:40.790428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.535 [2024-10-01 13:52:40.790446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.535 [2024-10-01 13:52:40.791673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.535 [2024-10-01 13:52:40.792459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.535 [2024-10-01 13:52:40.792499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.535 [2024-10-01 13:52:40.792519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.535 [2024-10-01 13:52:40.792838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.535 [2024-10-01 13:52:40.795120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.535 [2024-10-01 13:52:40.795233] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.535 [2024-10-01 13:52:40.795264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.535 [2024-10-01 13:52:40.795282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.535 [2024-10-01 13:52:40.795315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.535 [2024-10-01 13:52:40.795346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.535 [2024-10-01 13:52:40.795362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.535 [2024-10-01 13:52:40.795376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.535 [2024-10-01 13:52:40.795425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.535 [2024-10-01 13:52:40.800375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.535 [2024-10-01 13:52:40.800493] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.535 [2024-10-01 13:52:40.800525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.535 [2024-10-01 13:52:40.800543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.535 [2024-10-01 13:52:40.801767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.535 [2024-10-01 13:52:40.802034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.535 [2024-10-01 13:52:40.802065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.535 [2024-10-01 13:52:40.802081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.535 [2024-10-01 13:52:40.803003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.535 [2024-10-01 13:52:40.805208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.535 [2024-10-01 13:52:40.805320] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.535 [2024-10-01 13:52:40.805352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.535 [2024-10-01 13:52:40.805370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.535 [2024-10-01 13:52:40.806617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.535 [2024-10-01 13:52:40.807433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.535 [2024-10-01 13:52:40.807472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.535 [2024-10-01 13:52:40.807491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.535 [2024-10-01 13:52:40.807812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.536 [2024-10-01 13:52:40.811270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.536 [2024-10-01 13:52:40.811399] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.536 [2024-10-01 13:52:40.811431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.536 [2024-10-01 13:52:40.811450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.536 [2024-10-01 13:52:40.811483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.536 [2024-10-01 13:52:40.811515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.536 [2024-10-01 13:52:40.811533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.536 [2024-10-01 13:52:40.811547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.536 [2024-10-01 13:52:40.811578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.536 [2024-10-01 13:52:40.815297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.536 [2024-10-01 13:52:40.815421] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.536 [2024-10-01 13:52:40.815453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.536 [2024-10-01 13:52:40.815503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.536 [2024-10-01 13:52:40.815538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.536 [2024-10-01 13:52:40.815570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.536 [2024-10-01 13:52:40.815588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.536 [2024-10-01 13:52:40.815602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.536 [2024-10-01 13:52:40.816862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.536 [2024-10-01 13:52:40.822767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.536 [2024-10-01 13:52:40.822983] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.536 [2024-10-01 13:52:40.823019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.536 [2024-10-01 13:52:40.823038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.536 [2024-10-01 13:52:40.823075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.536 [2024-10-01 13:52:40.823127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.536 [2024-10-01 13:52:40.823150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.536 [2024-10-01 13:52:40.823167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.536 [2024-10-01 13:52:40.823200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.536 [2024-10-01 13:52:40.825396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.536 [2024-10-01 13:52:40.825508] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.536 [2024-10-01 13:52:40.825539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.536 [2024-10-01 13:52:40.825558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.536 [2024-10-01 13:52:40.826170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.536 [2024-10-01 13:52:40.826361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.536 [2024-10-01 13:52:40.826397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.536 [2024-10-01 13:52:40.826416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.536 [2024-10-01 13:52:40.826529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.536 [2024-10-01 13:52:40.833881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.536 [2024-10-01 13:52:40.834031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.536 [2024-10-01 13:52:40.834064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.536 [2024-10-01 13:52:40.834082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.536 [2024-10-01 13:52:40.834116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.536 [2024-10-01 13:52:40.834147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.536 [2024-10-01 13:52:40.834192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.536 [2024-10-01 13:52:40.834210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.536 [2024-10-01 13:52:40.834243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.536 [2024-10-01 13:52:40.835483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.536 [2024-10-01 13:52:40.835595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.536 [2024-10-01 13:52:40.835627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.536 [2024-10-01 13:52:40.835645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.536 [2024-10-01 13:52:40.836879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.536 [2024-10-01 13:52:40.837688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.536 [2024-10-01 13:52:40.837728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.536 [2024-10-01 13:52:40.837748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.536 [2024-10-01 13:52:40.838084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.536 [2024-10-01 13:52:40.844174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.536 [2024-10-01 13:52:40.844291] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.536 [2024-10-01 13:52:40.844322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.536 [2024-10-01 13:52:40.844340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.536 [2024-10-01 13:52:40.844926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.536 [2024-10-01 13:52:40.845119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.536 [2024-10-01 13:52:40.845155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.536 [2024-10-01 13:52:40.845174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.536 [2024-10-01 13:52:40.845295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.536 [2024-10-01 13:52:40.845575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.536 [2024-10-01 13:52:40.845678] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.536 [2024-10-01 13:52:40.845710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.536 [2024-10-01 13:52:40.845727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.536 [2024-10-01 13:52:40.846967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.536 [2024-10-01 13:52:40.847206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.536 [2024-10-01 13:52:40.847243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.536 [2024-10-01 13:52:40.847261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.536 [2024-10-01 13:52:40.848170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.536 [2024-10-01 13:52:40.854703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.536 [2024-10-01 13:52:40.854825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.536 [2024-10-01 13:52:40.854858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.536 [2024-10-01 13:52:40.854875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.536 [2024-10-01 13:52:40.854909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.536 [2024-10-01 13:52:40.854961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.536 [2024-10-01 13:52:40.854978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.536 [2024-10-01 13:52:40.854992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.536 [2024-10-01 13:52:40.855022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.536 [2024-10-01 13:52:40.856359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.536 [2024-10-01 13:52:40.856478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.536 [2024-10-01 13:52:40.856508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.536 [2024-10-01 13:52:40.856526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.536 [2024-10-01 13:52:40.856559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.536 [2024-10-01 13:52:40.856590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.536 [2024-10-01 13:52:40.856607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.536 [2024-10-01 13:52:40.856621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.536 [2024-10-01 13:52:40.856651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.536 [2024-10-01 13:52:40.864851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.536 [2024-10-01 13:52:40.864988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.536 [2024-10-01 13:52:40.865021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.536 [2024-10-01 13:52:40.865039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.536 [2024-10-01 13:52:40.865073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.536 [2024-10-01 13:52:40.865104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.536 [2024-10-01 13:52:40.865122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.536 [2024-10-01 13:52:40.865136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.536 [2024-10-01 13:52:40.865167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.536 [2024-10-01 13:52:40.869312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.536 8126.67 IOPS, 31.74 MiB/s [2024-10-01 13:52:40.870182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.536 [2024-10-01 13:52:40.870228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.536 [2024-10-01 13:52:40.870280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.536 [2024-10-01 13:52:40.870388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.536 [2024-10-01 13:52:40.870429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.536 [2024-10-01 13:52:40.870446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.536 [2024-10-01 13:52:40.870461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.536 [2024-10-01 13:52:40.871076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.536 [2024-10-01 13:52:40.875008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.536 [2024-10-01 13:52:40.875125] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.536 [2024-10-01 13:52:40.875158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.536 [2024-10-01 13:52:40.875176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.536 [2024-10-01 13:52:40.875224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.536 [2024-10-01 13:52:40.875271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.536 [2024-10-01 13:52:40.875291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.536 [2024-10-01 13:52:40.875306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.536 [2024-10-01 13:52:40.875336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.536 [2024-10-01 13:52:40.879422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.536 [2024-10-01 13:52:40.879759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.879803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.537 [2024-10-01 13:52:40.879824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.879998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.880115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.537 [2024-10-01 13:52:40.880135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.537 [2024-10-01 13:52:40.880150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.537 [2024-10-01 13:52:40.880190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.537 [2024-10-01 13:52:40.885100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.537 [2024-10-01 13:52:40.885220] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.885252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.537 [2024-10-01 13:52:40.885270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.885302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.885344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.537 [2024-10-01 13:52:40.885394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.537 [2024-10-01 13:52:40.885411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.537 [2024-10-01 13:52:40.885443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.537 [2024-10-01 13:52:40.890014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.537 [2024-10-01 13:52:40.890140] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.890172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.537 [2024-10-01 13:52:40.890191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.890225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.890261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.537 [2024-10-01 13:52:40.890278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.537 [2024-10-01 13:52:40.890293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.537 [2024-10-01 13:52:40.890323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.537 [2024-10-01 13:52:40.895196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.537 [2024-10-01 13:52:40.895312] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.895344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.537 [2024-10-01 13:52:40.895361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.895394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.895439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.537 [2024-10-01 13:52:40.895459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.537 [2024-10-01 13:52:40.895474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.537 [2024-10-01 13:52:40.896701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.537 [2024-10-01 13:52:40.900113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.537 [2024-10-01 13:52:40.900227] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.900259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.537 [2024-10-01 13:52:40.900277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.900319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.900350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.537 [2024-10-01 13:52:40.900367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.537 [2024-10-01 13:52:40.900382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.537 [2024-10-01 13:52:40.900412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.537 [2024-10-01 13:52:40.905286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.537 [2024-10-01 13:52:40.905435] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.905466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.537 [2024-10-01 13:52:40.905484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.905528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.905571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.537 [2024-10-01 13:52:40.905591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.537 [2024-10-01 13:52:40.905605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.537 [2024-10-01 13:52:40.906862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.537 [2024-10-01 13:52:40.910207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.537 [2024-10-01 13:52:40.910320] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.910351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.537 [2024-10-01 13:52:40.910369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.910401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.910432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.537 [2024-10-01 13:52:40.910449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.537 [2024-10-01 13:52:40.910464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.537 [2024-10-01 13:52:40.911705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.537 [2024-10-01 13:52:40.915411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.537 [2024-10-01 13:52:40.915528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.915560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.537 [2024-10-01 13:52:40.915578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.916177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.916374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.537 [2024-10-01 13:52:40.916411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.537 [2024-10-01 13:52:40.916430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.537 [2024-10-01 13:52:40.916540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.537 [2024-10-01 13:52:40.920296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.537 [2024-10-01 13:52:40.920417] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.920448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.537 [2024-10-01 13:52:40.920466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.920517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.920550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.537 [2024-10-01 13:52:40.920567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.537 [2024-10-01 13:52:40.920581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.537 [2024-10-01 13:52:40.920612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.537 [2024-10-01 13:52:40.928271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.537 [2024-10-01 13:52:40.928393] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.928426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.537 [2024-10-01 13:52:40.928444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.928477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.928508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.537 [2024-10-01 13:52:40.928525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.537 [2024-10-01 13:52:40.928539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.537 [2024-10-01 13:52:40.928570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.537 [2024-10-01 13:52:40.930400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.537 [2024-10-01 13:52:40.930511] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.930555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.537 [2024-10-01 13:52:40.930575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.931165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.931349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.537 [2024-10-01 13:52:40.931395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.537 [2024-10-01 13:52:40.931413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.537 [2024-10-01 13:52:40.931541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.537 [2024-10-01 13:52:40.938650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.537 [2024-10-01 13:52:40.938765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.938798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.537 [2024-10-01 13:52:40.938816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.938848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.938878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.537 [2024-10-01 13:52:40.938895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.537 [2024-10-01 13:52:40.938962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.537 [2024-10-01 13:52:40.938999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.537 [2024-10-01 13:52:40.942408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.537 [2024-10-01 13:52:40.942755] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.942803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.537 [2024-10-01 13:52:40.942833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.942905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.942961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.537 [2024-10-01 13:52:40.942979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.537 [2024-10-01 13:52:40.942994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.537 [2024-10-01 13:52:40.943025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.537 [2024-10-01 13:52:40.949044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.537 [2024-10-01 13:52:40.949162] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.949194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.537 [2024-10-01 13:52:40.949211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.949792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.949994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.537 [2024-10-01 13:52:40.950030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.537 [2024-10-01 13:52:40.950049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.537 [2024-10-01 13:52:40.950158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.537 [2024-10-01 13:52:40.953709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.537 [2024-10-01 13:52:40.953823] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.537 [2024-10-01 13:52:40.953854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.537 [2024-10-01 13:52:40.953871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.537 [2024-10-01 13:52:40.953904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.537 [2024-10-01 13:52:40.953955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.538 [2024-10-01 13:52:40.953974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.538 [2024-10-01 13:52:40.953989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.538 [2024-10-01 13:52:40.954019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.538 [2024-10-01 13:52:40.959649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.538 [2024-10-01 13:52:40.959763] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.538 [2024-10-01 13:52:40.959809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.538 [2024-10-01 13:52:40.959828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.538 [2024-10-01 13:52:40.959862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.538 [2024-10-01 13:52:40.959893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.538 [2024-10-01 13:52:40.959925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.538 [2024-10-01 13:52:40.959943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.538 [2024-10-01 13:52:40.959976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.538 [2024-10-01 13:52:40.964107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.538 [2024-10-01 13:52:40.964222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.538 [2024-10-01 13:52:40.964254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.538 [2024-10-01 13:52:40.964271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.538 [2024-10-01 13:52:40.964842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.538 [2024-10-01 13:52:40.965046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.538 [2024-10-01 13:52:40.965083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.538 [2024-10-01 13:52:40.965101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.538 [2024-10-01 13:52:40.965210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.538 [2024-10-01 13:52:40.969966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.538 [2024-10-01 13:52:40.970098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.538 [2024-10-01 13:52:40.970130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.538 [2024-10-01 13:52:40.970148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.538 [2024-10-01 13:52:40.970182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.538 [2024-10-01 13:52:40.970214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.538 [2024-10-01 13:52:40.970231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.538 [2024-10-01 13:52:40.970246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.538 [2024-10-01 13:52:40.970277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.538 [2024-10-01 13:52:40.974760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.538 [2024-10-01 13:52:40.974889] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.538 [2024-10-01 13:52:40.974936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.538 [2024-10-01 13:52:40.974957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.538 [2024-10-01 13:52:40.974992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.538 [2024-10-01 13:52:40.975058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.538 [2024-10-01 13:52:40.975089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.538 [2024-10-01 13:52:40.975104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.538 [2024-10-01 13:52:40.975135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.538 [2024-10-01 13:52:40.980125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.538 [2024-10-01 13:52:40.980243] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.538 [2024-10-01 13:52:40.980275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.538 [2024-10-01 13:52:40.980294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.538 [2024-10-01 13:52:40.980327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.538 [2024-10-01 13:52:40.980359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.538 [2024-10-01 13:52:40.980382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.538 [2024-10-01 13:52:40.980396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.538 [2024-10-01 13:52:40.980426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.538 [2024-10-01 13:52:40.984858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.538 [2024-10-01 13:52:40.984990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.538 [2024-10-01 13:52:40.985023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.538 [2024-10-01 13:52:40.985041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.538 [2024-10-01 13:52:40.985074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.538 [2024-10-01 13:52:40.985105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.538 [2024-10-01 13:52:40.985122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.538 [2024-10-01 13:52:40.985136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.538 [2024-10-01 13:52:40.985166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.538 [2024-10-01 13:52:40.990215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.538 [2024-10-01 13:52:40.990328] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.538 [2024-10-01 13:52:40.990360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.538 [2024-10-01 13:52:40.990377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.538 [2024-10-01 13:52:40.990979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.538 [2024-10-01 13:52:40.991189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.538 [2024-10-01 13:52:40.991229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.538 [2024-10-01 13:52:40.991247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.538 [2024-10-01 13:52:40.991356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.538 [2024-10-01 13:52:40.994962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.538 [2024-10-01 13:52:40.995078] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.538 [2024-10-01 13:52:40.995113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.538 [2024-10-01 13:52:40.995130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.538 [2024-10-01 13:52:40.995178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.538 [2024-10-01 13:52:40.995212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.538 [2024-10-01 13:52:40.995230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.538 [2024-10-01 13:52:40.995244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.538 [2024-10-01 13:52:40.996457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.538 [2024-10-01 13:52:41.001868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.538 [2024-10-01 13:52:41.002216] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.538 [2024-10-01 13:52:41.002260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.538 [2024-10-01 13:52:41.002281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.538 [2024-10-01 13:52:41.002352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.538 [2024-10-01 13:52:41.002391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.538 [2024-10-01 13:52:41.002410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.538 [2024-10-01 13:52:41.002424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.538 [2024-10-01 13:52:41.002455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.538 [2024-10-01 13:52:41.005601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.538 [2024-10-01 13:52:41.005720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.538 [2024-10-01 13:52:41.005751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.538 [2024-10-01 13:52:41.005769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.538 [2024-10-01 13:52:41.005802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.538 [2024-10-01 13:52:41.005833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.538 [2024-10-01 13:52:41.005850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.538 [2024-10-01 13:52:41.005864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.538 [2024-10-01 13:52:41.005894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.538 [2024-10-01 13:52:41.012803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.538 [2024-10-01 13:52:41.012932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.538 [2024-10-01 13:52:41.012964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.538 [2024-10-01 13:52:41.013008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.538 [2024-10-01 13:52:41.013044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.538 [2024-10-01 13:52:41.013076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.538 [2024-10-01 13:52:41.013093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.538 [2024-10-01 13:52:41.013107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.538 [2024-10-01 13:52:41.013138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.538 [2024-10-01 13:52:41.016692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.538 [2024-10-01 13:52:41.016806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.538 [2024-10-01 13:52:41.016837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.538 [2024-10-01 13:52:41.016855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.538 [2024-10-01 13:52:41.016904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.538 [2024-10-01 13:52:41.016967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.538 [2024-10-01 13:52:41.016986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.538 [2024-10-01 13:52:41.017000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.538 [2024-10-01 13:52:41.017031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.538 [2024-10-01 13:52:41.022901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.538 [2024-10-01 13:52:41.023572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.538 [2024-10-01 13:52:41.023616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.538 [2024-10-01 13:52:41.023637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.538 [2024-10-01 13:52:41.023803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.538 [2024-10-01 13:52:41.023943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.538 [2024-10-01 13:52:41.023966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.538 [2024-10-01 13:52:41.023981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.538 [2024-10-01 13:52:41.024021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.538 [2024-10-01 13:52:41.027412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.538 [2024-10-01 13:52:41.027527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.538 [2024-10-01 13:52:41.027559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.538 [2024-10-01 13:52:41.027577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.538 [2024-10-01 13:52:41.027610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.027641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.027678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.027693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.539 [2024-10-01 13:52:41.027726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.539 [2024-10-01 13:52:41.033149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.539 [2024-10-01 13:52:41.033280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.539 [2024-10-01 13:52:41.033312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.539 [2024-10-01 13:52:41.033330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.539 [2024-10-01 13:52:41.033371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.033403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.033420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.033435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.539 [2024-10-01 13:52:41.033466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.539 [2024-10-01 13:52:41.037537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.539 [2024-10-01 13:52:41.038230] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.539 [2024-10-01 13:52:41.038275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.539 [2024-10-01 13:52:41.038297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.539 [2024-10-01 13:52:41.038466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.038603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.038628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.038649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.539 [2024-10-01 13:52:41.038709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.539 [2024-10-01 13:52:41.043254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.539 [2024-10-01 13:52:41.043372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.539 [2024-10-01 13:52:41.043405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.539 [2024-10-01 13:52:41.043423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.539 [2024-10-01 13:52:41.043455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.043486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.043504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.043519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.539 [2024-10-01 13:52:41.043550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.539 [2024-10-01 13:52:41.047870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.539 [2024-10-01 13:52:41.048048] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.539 [2024-10-01 13:52:41.048080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.539 [2024-10-01 13:52:41.048098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.539 [2024-10-01 13:52:41.048132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.048162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.048180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.048194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.539 [2024-10-01 13:52:41.048225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.539 [2024-10-01 13:52:41.053348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.539 [2024-10-01 13:52:41.053462] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.539 [2024-10-01 13:52:41.053494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.539 [2024-10-01 13:52:41.053511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.539 [2024-10-01 13:52:41.053544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.053575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.053593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.053607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.539 [2024-10-01 13:52:41.053638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.539 [2024-10-01 13:52:41.058010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.539 [2024-10-01 13:52:41.058124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.539 [2024-10-01 13:52:41.058156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.539 [2024-10-01 13:52:41.058173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.539 [2024-10-01 13:52:41.058205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.058237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.058253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.058268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.539 [2024-10-01 13:52:41.058298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.539 [2024-10-01 13:52:41.064109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.539 [2024-10-01 13:52:41.064240] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.539 [2024-10-01 13:52:41.064272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.539 [2024-10-01 13:52:41.064289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.539 [2024-10-01 13:52:41.064339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.064372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.064389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.064404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.539 [2024-10-01 13:52:41.064434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.539 [2024-10-01 13:52:41.068104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.539 [2024-10-01 13:52:41.068218] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.539 [2024-10-01 13:52:41.068250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.539 [2024-10-01 13:52:41.068268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.539 [2024-10-01 13:52:41.069483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.069734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.069773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.069792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.539 [2024-10-01 13:52:41.070708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.539 [2024-10-01 13:52:41.075200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.539 [2024-10-01 13:52:41.075317] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.539 [2024-10-01 13:52:41.075348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.539 [2024-10-01 13:52:41.075366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.539 [2024-10-01 13:52:41.075400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.075430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.075448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.075463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.539 [2024-10-01 13:52:41.075493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.539 [2024-10-01 13:52:41.078605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.539 [2024-10-01 13:52:41.078730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.539 [2024-10-01 13:52:41.078760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.539 [2024-10-01 13:52:41.078778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.539 [2024-10-01 13:52:41.078811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.078842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.078859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.078891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.539 [2024-10-01 13:52:41.078942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.539 [2024-10-01 13:52:41.085736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.539 [2024-10-01 13:52:41.085855] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.539 [2024-10-01 13:52:41.085887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.539 [2024-10-01 13:52:41.085905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.539 [2024-10-01 13:52:41.085959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.085991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.086008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.086023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.539 [2024-10-01 13:52:41.086053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.539 [2024-10-01 13:52:41.089631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.539 [2024-10-01 13:52:41.089746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.539 [2024-10-01 13:52:41.089779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.539 [2024-10-01 13:52:41.089797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.539 [2024-10-01 13:52:41.089829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.089860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.089878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.089892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.539 [2024-10-01 13:52:41.089942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.539 [2024-10-01 13:52:41.095828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.539 [2024-10-01 13:52:41.096501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.539 [2024-10-01 13:52:41.096546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.539 [2024-10-01 13:52:41.096566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.539 [2024-10-01 13:52:41.096730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.096845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.096872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.096889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.539 [2024-10-01 13:52:41.096943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.539 [2024-10-01 13:52:41.100298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.539 [2024-10-01 13:52:41.100410] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.539 [2024-10-01 13:52:41.100458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.539 [2024-10-01 13:52:41.100477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.539 [2024-10-01 13:52:41.100510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.539 [2024-10-01 13:52:41.100541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.539 [2024-10-01 13:52:41.100559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.539 [2024-10-01 13:52:41.100573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.100603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.540 [2024-10-01 13:52:41.105942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.106056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.106088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.540 [2024-10-01 13:52:41.106105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.106138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.540 [2024-10-01 13:52:41.106168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.540 [2024-10-01 13:52:41.106186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.540 [2024-10-01 13:52:41.106200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.106230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.540 [2024-10-01 13:52:41.111132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.111252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.111283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.540 [2024-10-01 13:52:41.111300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.111333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.540 [2024-10-01 13:52:41.111364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.540 [2024-10-01 13:52:41.111381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.540 [2024-10-01 13:52:41.111395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.111441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.540 [2024-10-01 13:52:41.116036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.116150] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.116181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.540 [2024-10-01 13:52:41.116199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.116232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.540 [2024-10-01 13:52:41.116282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.540 [2024-10-01 13:52:41.116300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.540 [2024-10-01 13:52:41.116314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.116344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.540 [2024-10-01 13:52:41.121223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.121338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.121370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.540 [2024-10-01 13:52:41.121392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.121977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.540 [2024-10-01 13:52:41.122185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.540 [2024-10-01 13:52:41.122229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.540 [2024-10-01 13:52:41.122247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.122356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.540 [2024-10-01 13:52:41.126126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.126239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.126270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.540 [2024-10-01 13:52:41.126288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.127513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.540 [2024-10-01 13:52:41.127748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.540 [2024-10-01 13:52:41.127777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.540 [2024-10-01 13:52:41.127792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.128692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.540 [2024-10-01 13:52:41.133164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.133316] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.133349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.540 [2024-10-01 13:52:41.133366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.133400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.540 [2024-10-01 13:52:41.133431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.540 [2024-10-01 13:52:41.133448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.540 [2024-10-01 13:52:41.133462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.133511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.540 [2024-10-01 13:52:41.136624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.136747] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.136779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.540 [2024-10-01 13:52:41.136796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.136829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.540 [2024-10-01 13:52:41.136860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.540 [2024-10-01 13:52:41.136877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.540 [2024-10-01 13:52:41.136891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.136937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.540 [2024-10-01 13:52:41.143906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.144050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.144082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.540 [2024-10-01 13:52:41.144107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.144141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.540 [2024-10-01 13:52:41.144172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.540 [2024-10-01 13:52:41.144190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.540 [2024-10-01 13:52:41.144205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.144236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.540 [2024-10-01 13:52:41.147823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.148014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.148054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.540 [2024-10-01 13:52:41.148072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.148106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.540 [2024-10-01 13:52:41.148139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.540 [2024-10-01 13:52:41.148157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.540 [2024-10-01 13:52:41.148171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.148202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.540 [2024-10-01 13:52:41.154082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.154800] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.154846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.540 [2024-10-01 13:52:41.154901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.155097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.540 [2024-10-01 13:52:41.155216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.540 [2024-10-01 13:52:41.155238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.540 [2024-10-01 13:52:41.155253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.155294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.540 [2024-10-01 13:52:41.158705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.158820] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.158852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.540 [2024-10-01 13:52:41.158869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.158902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.540 [2024-10-01 13:52:41.158954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.540 [2024-10-01 13:52:41.158973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.540 [2024-10-01 13:52:41.158988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.159018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.540 [2024-10-01 13:52:41.164424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.164573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.164605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.540 [2024-10-01 13:52:41.164623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.164657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.540 [2024-10-01 13:52:41.164689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.540 [2024-10-01 13:52:41.164706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.540 [2024-10-01 13:52:41.164721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.164753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.540 [2024-10-01 13:52:41.168889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.169028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.169059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.540 [2024-10-01 13:52:41.169078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.169727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.540 [2024-10-01 13:52:41.169987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.540 [2024-10-01 13:52:41.170044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.540 [2024-10-01 13:52:41.170061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.170217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.540 [2024-10-01 13:52:41.174664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.174786] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.174818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.540 [2024-10-01 13:52:41.174836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.174868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.540 [2024-10-01 13:52:41.174900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.540 [2024-10-01 13:52:41.174935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.540 [2024-10-01 13:52:41.174951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.540 [2024-10-01 13:52:41.174984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.540 [2024-10-01 13:52:41.179312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.540 [2024-10-01 13:52:41.179429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.540 [2024-10-01 13:52:41.179460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.540 [2024-10-01 13:52:41.179478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.540 [2024-10-01 13:52:41.179510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.179541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.179558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.179572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.179602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.541 [2024-10-01 13:52:41.184763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.184888] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.184933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.541 [2024-10-01 13:52:41.184954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.184987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.185018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.185035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.185049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.185081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.541 [2024-10-01 13:52:41.189406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.189544] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.189576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.541 [2024-10-01 13:52:41.189594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.189625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.189656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.189673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.189687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.189717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.541 [2024-10-01 13:52:41.195524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.195646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.195678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.541 [2024-10-01 13:52:41.195696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.195743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.195777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.195794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.195808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.195838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.541 [2024-10-01 13:52:41.199519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.199633] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.199664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.541 [2024-10-01 13:52:41.199681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.200900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.201142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.201188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.201206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.202101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.541 [2024-10-01 13:52:41.206596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.206711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.206742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.541 [2024-10-01 13:52:41.206759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.206815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.206847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.206864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.206878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.206927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.541 [2024-10-01 13:52:41.210005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.210125] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.210156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.541 [2024-10-01 13:52:41.210173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.210206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.210237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.210254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.210268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.210297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.541 [2024-10-01 13:52:41.217226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.217339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.217370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.541 [2024-10-01 13:52:41.217388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.217420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.217450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.217466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.217480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.217511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.541 [2024-10-01 13:52:41.221104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.221218] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.221249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.541 [2024-10-01 13:52:41.221268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.221315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.221349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.221367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.221399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.221432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.541 [2024-10-01 13:52:41.227321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.227989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.228032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.541 [2024-10-01 13:52:41.228053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.228217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.228331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.228351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.228366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.228404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.541 [2024-10-01 13:52:41.231759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.231873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.231904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.541 [2024-10-01 13:52:41.231940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.231974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.232005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.232022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.232037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.232067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.541 [2024-10-01 13:52:41.237416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.237530] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.237560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.541 [2024-10-01 13:52:41.237577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.237610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.237641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.237658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.237672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.237703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.541 [2024-10-01 13:52:41.242571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.242695] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.242743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.541 [2024-10-01 13:52:41.242763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.242796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.242846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.242867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.242882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.242931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.541 [2024-10-01 13:52:41.247509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.247625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.247657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.541 [2024-10-01 13:52:41.247675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.247708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.247739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.247756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.247770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.247800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.541 [2024-10-01 13:52:41.252665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.252780] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.252811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.541 [2024-10-01 13:52:41.252829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.253418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.253602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.541 [2024-10-01 13:52:41.253629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.541 [2024-10-01 13:52:41.253645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.541 [2024-10-01 13:52:41.253752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.541 [2024-10-01 13:52:41.257602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.541 [2024-10-01 13:52:41.257715] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.541 [2024-10-01 13:52:41.257746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.541 [2024-10-01 13:52:41.257764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.541 [2024-10-01 13:52:41.259007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.541 [2024-10-01 13:52:41.259263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.259307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.259325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.260230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.542 [2024-10-01 13:52:41.264594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.542 [2024-10-01 13:52:41.264766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.542 [2024-10-01 13:52:41.264799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.542 [2024-10-01 13:52:41.264817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.542 [2024-10-01 13:52:41.264850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.542 [2024-10-01 13:52:41.264881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.264898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.264929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.264964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.542 [2024-10-01 13:52:41.268028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.542 [2024-10-01 13:52:41.268151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.542 [2024-10-01 13:52:41.268182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.542 [2024-10-01 13:52:41.268201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.542 [2024-10-01 13:52:41.268233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.542 [2024-10-01 13:52:41.268264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.268281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.268295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.268325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.542 [2024-10-01 13:52:41.275260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.542 [2024-10-01 13:52:41.275382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.542 [2024-10-01 13:52:41.275414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.542 [2024-10-01 13:52:41.275432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.542 [2024-10-01 13:52:41.275464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.542 [2024-10-01 13:52:41.275496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.275512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.275527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.275577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.542 [2024-10-01 13:52:41.279175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.542 [2024-10-01 13:52:41.279328] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.542 [2024-10-01 13:52:41.279377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.542 [2024-10-01 13:52:41.279394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.542 [2024-10-01 13:52:41.279429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.542 [2024-10-01 13:52:41.279460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.279477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.279492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.279523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.542 [2024-10-01 13:52:41.285419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.542 [2024-10-01 13:52:41.285533] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.542 [2024-10-01 13:52:41.285565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.542 [2024-10-01 13:52:41.285583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.542 [2024-10-01 13:52:41.286169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.542 [2024-10-01 13:52:41.286354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.286382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.286397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.286503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.542 [2024-10-01 13:52:41.289891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.542 [2024-10-01 13:52:41.290028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.542 [2024-10-01 13:52:41.290060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.542 [2024-10-01 13:52:41.290078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.542 [2024-10-01 13:52:41.290110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.542 [2024-10-01 13:52:41.290141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.290158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.290172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.290202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.542 [2024-10-01 13:52:41.295602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.542 [2024-10-01 13:52:41.295717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.542 [2024-10-01 13:52:41.295748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.542 [2024-10-01 13:52:41.295782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.542 [2024-10-01 13:52:41.295830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.542 [2024-10-01 13:52:41.295864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.295882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.295896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.295944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.542 [2024-10-01 13:52:41.299998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.542 [2024-10-01 13:52:41.300649] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.542 [2024-10-01 13:52:41.300693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.542 [2024-10-01 13:52:41.300713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.542 [2024-10-01 13:52:41.300877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.542 [2024-10-01 13:52:41.301007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.301036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.301051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.301090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.542 [2024-10-01 13:52:41.305697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.542 [2024-10-01 13:52:41.305810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.542 [2024-10-01 13:52:41.305842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.542 [2024-10-01 13:52:41.305859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.542 [2024-10-01 13:52:41.305891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.542 [2024-10-01 13:52:41.305937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.305960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.305975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.306005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.542 [2024-10-01 13:52:41.310155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.542 [2024-10-01 13:52:41.310270] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.542 [2024-10-01 13:52:41.310301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.542 [2024-10-01 13:52:41.310318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.542 [2024-10-01 13:52:41.310350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.542 [2024-10-01 13:52:41.310381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.310415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.310431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.310463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.542 [2024-10-01 13:52:41.315791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.542 [2024-10-01 13:52:41.315906] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.542 [2024-10-01 13:52:41.315951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.542 [2024-10-01 13:52:41.315970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.542 [2024-10-01 13:52:41.316003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.542 [2024-10-01 13:52:41.317221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.317260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.317278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.317508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.542 [2024-10-01 13:52:41.320243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.542 [2024-10-01 13:52:41.320356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.542 [2024-10-01 13:52:41.320387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.542 [2024-10-01 13:52:41.320405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.542 [2024-10-01 13:52:41.320437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.542 [2024-10-01 13:52:41.320468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.320485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.320499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.320529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.542 [2024-10-01 13:52:41.326478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.542 [2024-10-01 13:52:41.326625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.542 [2024-10-01 13:52:41.326658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.542 [2024-10-01 13:52:41.326676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.542 [2024-10-01 13:52:41.326709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.542 [2024-10-01 13:52:41.326740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.326757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.326772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.326802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.542 [2024-10-01 13:52:41.330330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.542 [2024-10-01 13:52:41.330475] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.542 [2024-10-01 13:52:41.330508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.542 [2024-10-01 13:52:41.330526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.542 [2024-10-01 13:52:41.330572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.542 [2024-10-01 13:52:41.330605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.542 [2024-10-01 13:52:41.330621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.542 [2024-10-01 13:52:41.330635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.542 [2024-10-01 13:52:41.330672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.542 [2024-10-01 13:52:41.337720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.543 [2024-10-01 13:52:41.338129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.543 [2024-10-01 13:52:41.338176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.543 [2024-10-01 13:52:41.338197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.543 [2024-10-01 13:52:41.338270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.543 [2024-10-01 13:52:41.338329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.543 [2024-10-01 13:52:41.338352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.543 [2024-10-01 13:52:41.338368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.543 [2024-10-01 13:52:41.338400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.543 [2024-10-01 13:52:41.340449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.543 [2024-10-01 13:52:41.340560] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.543 [2024-10-01 13:52:41.340592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.543 [2024-10-01 13:52:41.340609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.543 [2024-10-01 13:52:41.340642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.543 [2024-10-01 13:52:41.340672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.543 [2024-10-01 13:52:41.340690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.543 [2024-10-01 13:52:41.340704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.543 [2024-10-01 13:52:41.340734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.543 [2024-10-01 13:52:41.349013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.543 [2024-10-01 13:52:41.349178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.543 [2024-10-01 13:52:41.349212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.543 [2024-10-01 13:52:41.349231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.543 [2024-10-01 13:52:41.349296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.543 [2024-10-01 13:52:41.349329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.543 [2024-10-01 13:52:41.349347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.543 [2024-10-01 13:52:41.349363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.543 [2024-10-01 13:52:41.349395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.543 [2024-10-01 13:52:41.350545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.543 [2024-10-01 13:52:41.351873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.543 [2024-10-01 13:52:41.351930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.543 [2024-10-01 13:52:41.351953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.543 [2024-10-01 13:52:41.352730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.543 [2024-10-01 13:52:41.353103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.543 [2024-10-01 13:52:41.353141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.543 [2024-10-01 13:52:41.353160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.543 [2024-10-01 13:52:41.353233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.543 [2024-10-01 13:52:41.359144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.543 [2024-10-01 13:52:41.359853] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.543 [2024-10-01 13:52:41.359900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.543 [2024-10-01 13:52:41.359937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.543 [2024-10-01 13:52:41.360108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.543 [2024-10-01 13:52:41.360226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.543 [2024-10-01 13:52:41.360248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.543 [2024-10-01 13:52:41.360264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.543 [2024-10-01 13:52:41.360323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.543 [2024-10-01 13:52:41.362072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.543 [2024-10-01 13:52:41.363084] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.543 [2024-10-01 13:52:41.363128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.543 [2024-10-01 13:52:41.363150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.543 [2024-10-01 13:52:41.363890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.544 [2024-10-01 13:52:41.364026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.544 [2024-10-01 13:52:41.364051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.544 [2024-10-01 13:52:41.364096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.544 [2024-10-01 13:52:41.364132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.544 [2024-10-01 13:52:41.369565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.544 [2024-10-01 13:52:41.369700] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.544 [2024-10-01 13:52:41.369733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.544 [2024-10-01 13:52:41.369752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.544 [2024-10-01 13:52:41.369786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.544 [2024-10-01 13:52:41.369817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.544 [2024-10-01 13:52:41.369834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.544 [2024-10-01 13:52:41.369849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.544 [2024-10-01 13:52:41.369879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.544 [2024-10-01 13:52:41.372176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.544 [2024-10-01 13:52:41.372287] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.544 [2024-10-01 13:52:41.372318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.544 [2024-10-01 13:52:41.372336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.544 [2024-10-01 13:52:41.373864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.544 [2024-10-01 13:52:41.374082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.544 [2024-10-01 13:52:41.374110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.544 [2024-10-01 13:52:41.374126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.544 [2024-10-01 13:52:41.374762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.544 [2024-10-01 13:52:41.379862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.544 [2024-10-01 13:52:41.380019] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.544 [2024-10-01 13:52:41.380054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.544 [2024-10-01 13:52:41.380073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.544 [2024-10-01 13:52:41.380106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.544 [2024-10-01 13:52:41.380137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.544 [2024-10-01 13:52:41.380155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.544 [2024-10-01 13:52:41.380170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.544 [2024-10-01 13:52:41.380201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.544 [2024-10-01 13:52:41.382754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.544 [2024-10-01 13:52:41.382882] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.544 [2024-10-01 13:52:41.382960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.544 [2024-10-01 13:52:41.382982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.544 [2024-10-01 13:52:41.383017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.544 [2024-10-01 13:52:41.383049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.544 [2024-10-01 13:52:41.383066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.544 [2024-10-01 13:52:41.383081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.544 [2024-10-01 13:52:41.383112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.544 [2024-10-01 13:52:41.389986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.544 [2024-10-01 13:52:41.390126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.544 [2024-10-01 13:52:41.390159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.544 [2024-10-01 13:52:41.390183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.544 [2024-10-01 13:52:41.390231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.544 [2024-10-01 13:52:41.390266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.544 [2024-10-01 13:52:41.390284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.544 [2024-10-01 13:52:41.390299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.544 [2024-10-01 13:52:41.390330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.544 [2024-10-01 13:52:41.393590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.544 [2024-10-01 13:52:41.393704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.544 [2024-10-01 13:52:41.393736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.544 [2024-10-01 13:52:41.393754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.544 [2024-10-01 13:52:41.393787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.544 [2024-10-01 13:52:41.393818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.544 [2024-10-01 13:52:41.393834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.544 [2024-10-01 13:52:41.393849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.544 [2024-10-01 13:52:41.393880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.544 [2024-10-01 13:52:41.400096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.544 [2024-10-01 13:52:41.400244] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.544 [2024-10-01 13:52:41.400277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.544 [2024-10-01 13:52:41.400296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.544 [2024-10-01 13:52:41.400890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.544 [2024-10-01 13:52:41.401123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.544 [2024-10-01 13:52:41.401160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.544 [2024-10-01 13:52:41.401180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.544 [2024-10-01 13:52:41.401293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.544 [2024-10-01 13:52:41.403818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.544 [2024-10-01 13:52:41.404537] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.544 [2024-10-01 13:52:41.404584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.544 [2024-10-01 13:52:41.404606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.544 [2024-10-01 13:52:41.404778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.544 [2024-10-01 13:52:41.404899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.544 [2024-10-01 13:52:41.404935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.544 [2024-10-01 13:52:41.404952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.544 [2024-10-01 13:52:41.404994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.544 [2024-10-01 13:52:41.412517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.545 [2024-10-01 13:52:41.412749] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.545 [2024-10-01 13:52:41.412786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.545 [2024-10-01 13:52:41.412806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.545 [2024-10-01 13:52:41.412843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.545 [2024-10-01 13:52:41.412876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.545 [2024-10-01 13:52:41.412894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.545 [2024-10-01 13:52:41.412926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.545 [2024-10-01 13:52:41.412964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.545 [2024-10-01 13:52:41.414413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.545 [2024-10-01 13:52:41.414525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.545 [2024-10-01 13:52:41.414571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.545 [2024-10-01 13:52:41.414590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.545 [2024-10-01 13:52:41.414623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.545 [2024-10-01 13:52:41.414655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.545 [2024-10-01 13:52:41.414672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.545 [2024-10-01 13:52:41.414688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.545 [2024-10-01 13:52:41.414752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.545 [2024-10-01 13:52:41.423433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.545 [2024-10-01 13:52:41.423572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.545 [2024-10-01 13:52:41.423604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.545 [2024-10-01 13:52:41.423622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.545 [2024-10-01 13:52:41.423656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.545 [2024-10-01 13:52:41.423688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.545 [2024-10-01 13:52:41.423706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.545 [2024-10-01 13:52:41.423721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.545 [2024-10-01 13:52:41.423751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.545 [2024-10-01 13:52:41.424588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.545 [2024-10-01 13:52:41.424698] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.545 [2024-10-01 13:52:41.424728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.545 [2024-10-01 13:52:41.424746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.545 [2024-10-01 13:52:41.424778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.545 [2024-10-01 13:52:41.424809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.545 [2024-10-01 13:52:41.424826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.545 [2024-10-01 13:52:41.424841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.545 [2024-10-01 13:52:41.424871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.545 [2024-10-01 13:52:41.433736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.545 [2024-10-01 13:52:41.434477] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.545 [2024-10-01 13:52:41.434524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.545 [2024-10-01 13:52:41.434557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.545 [2024-10-01 13:52:41.434730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.545 [2024-10-01 13:52:41.434873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.545 [2024-10-01 13:52:41.434908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.545 [2024-10-01 13:52:41.434942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.545 [2024-10-01 13:52:41.434987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.545 [2024-10-01 13:52:41.435014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.545 [2024-10-01 13:52:41.435097] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.545 [2024-10-01 13:52:41.435126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.545 [2024-10-01 13:52:41.435169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.545 [2024-10-01 13:52:41.435204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.545 [2024-10-01 13:52:41.435235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.545 [2024-10-01 13:52:41.435253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.545 [2024-10-01 13:52:41.435266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.545 [2024-10-01 13:52:41.436560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.545 [2024-10-01 13:52:41.444358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.545 [2024-10-01 13:52:41.444533] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.545 [2024-10-01 13:52:41.444568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.545 [2024-10-01 13:52:41.444587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.545 [2024-10-01 13:52:41.444624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.545 [2024-10-01 13:52:41.444662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.545 [2024-10-01 13:52:41.444680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.545 [2024-10-01 13:52:41.444696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.545 [2024-10-01 13:52:41.444728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.545 [2024-10-01 13:52:41.445074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.545 [2024-10-01 13:52:41.445163] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.545 [2024-10-01 13:52:41.445192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.545 [2024-10-01 13:52:41.445210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.545 [2024-10-01 13:52:41.445242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.545 [2024-10-01 13:52:41.445839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.545 [2024-10-01 13:52:41.445877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.545 [2024-10-01 13:52:41.445897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.545 [2024-10-01 13:52:41.446081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.545 [2024-10-01 13:52:41.454718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.545 [2024-10-01 13:52:41.454903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.545 [2024-10-01 13:52:41.454953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.545 [2024-10-01 13:52:41.454974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.545 [2024-10-01 13:52:41.455011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.545 [2024-10-01 13:52:41.455042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.545 [2024-10-01 13:52:41.455091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.545 [2024-10-01 13:52:41.455109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.545 [2024-10-01 13:52:41.455145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.545 [2024-10-01 13:52:41.455197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.545 [2024-10-01 13:52:41.455284] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.545 [2024-10-01 13:52:41.455314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.545 [2024-10-01 13:52:41.455332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.545 [2024-10-01 13:52:41.456575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.545 [2024-10-01 13:52:41.457383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.545 [2024-10-01 13:52:41.457422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.545 [2024-10-01 13:52:41.457441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.545 [2024-10-01 13:52:41.457768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.545 [2024-10-01 13:52:41.464963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.545 [2024-10-01 13:52:41.465111] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.545 [2024-10-01 13:52:41.465147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.546 [2024-10-01 13:52:41.465167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.546 [2024-10-01 13:52:41.465203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.546 [2024-10-01 13:52:41.465239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.546 [2024-10-01 13:52:41.465258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.546 [2024-10-01 13:52:41.465274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.546 [2024-10-01 13:52:41.465315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.546 [2024-10-01 13:52:41.465350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.546 [2024-10-01 13:52:41.465440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.546 [2024-10-01 13:52:41.465469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.546 [2024-10-01 13:52:41.465486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.546 [2024-10-01 13:52:41.465518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.546 [2024-10-01 13:52:41.466791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.546 [2024-10-01 13:52:41.466821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.546 [2024-10-01 13:52:41.466836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.546 [2024-10-01 13:52:41.467091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.546 [2024-10-01 13:52:41.475079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.546 [2024-10-01 13:52:41.475274] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.546 [2024-10-01 13:52:41.475315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.546 [2024-10-01 13:52:41.475334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.546 [2024-10-01 13:52:41.475370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.546 [2024-10-01 13:52:41.475407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.546 [2024-10-01 13:52:41.475426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.546 [2024-10-01 13:52:41.475441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.546 [2024-10-01 13:52:41.475483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.546 [2024-10-01 13:52:41.475518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.546 [2024-10-01 13:52:41.476192] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.546 [2024-10-01 13:52:41.476237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.546 [2024-10-01 13:52:41.476257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.546 [2024-10-01 13:52:41.476442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.546 [2024-10-01 13:52:41.476586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.546 [2024-10-01 13:52:41.476614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.546 [2024-10-01 13:52:41.476631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.546 [2024-10-01 13:52:41.476671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.546 [2024-10-01 13:52:41.485241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.546 [2024-10-01 13:52:41.485399] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.546 [2024-10-01 13:52:41.485433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.546 [2024-10-01 13:52:41.485452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.546 [2024-10-01 13:52:41.485487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.546 [2024-10-01 13:52:41.485523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.546 [2024-10-01 13:52:41.485541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.546 [2024-10-01 13:52:41.485557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.546 [2024-10-01 13:52:41.485588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.546 [2024-10-01 13:52:41.486858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.546 [2024-10-01 13:52:41.487717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.546 [2024-10-01 13:52:41.487762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.546 [2024-10-01 13:52:41.487784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.546 [2024-10-01 13:52:41.488153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.546 [2024-10-01 13:52:41.488246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.546 [2024-10-01 13:52:41.488271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.546 [2024-10-01 13:52:41.488287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.546 [2024-10-01 13:52:41.488339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.546 [2024-10-01 13:52:41.495408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.546 [2024-10-01 13:52:41.495546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.546 [2024-10-01 13:52:41.495579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.546 [2024-10-01 13:52:41.495598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.546 [2024-10-01 13:52:41.495631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.546 [2024-10-01 13:52:41.495666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.546 [2024-10-01 13:52:41.495686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.546 [2024-10-01 13:52:41.495701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.546 [2024-10-01 13:52:41.495739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.546 [2024-10-01 13:52:41.498369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.546 [2024-10-01 13:52:41.499216] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.546 [2024-10-01 13:52:41.499261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.546 [2024-10-01 13:52:41.499282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.546 [2024-10-01 13:52:41.499385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.546 [2024-10-01 13:52:41.499423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.546 [2024-10-01 13:52:41.499443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.546 [2024-10-01 13:52:41.499457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.546 [2024-10-01 13:52:41.499490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.546 [2024-10-01 13:52:41.505512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.546 [2024-10-01 13:52:41.505641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.546 [2024-10-01 13:52:41.505672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.546 [2024-10-01 13:52:41.505690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.546 [2024-10-01 13:52:41.505724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.546 [2024-10-01 13:52:41.505755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.546 [2024-10-01 13:52:41.505773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.546 [2024-10-01 13:52:41.505813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.546 [2024-10-01 13:52:41.505846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.546 [2024-10-01 13:52:41.509436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.546 [2024-10-01 13:52:41.509553] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.546 [2024-10-01 13:52:41.509584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.546 [2024-10-01 13:52:41.509602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.547 [2024-10-01 13:52:41.510203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.547 [2024-10-01 13:52:41.510417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.547 [2024-10-01 13:52:41.510453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.547 [2024-10-01 13:52:41.510472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.547 [2024-10-01 13:52:41.510596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.547 [2024-10-01 13:52:41.515610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.547 [2024-10-01 13:52:41.515726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.547 [2024-10-01 13:52:41.515758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.547 [2024-10-01 13:52:41.515776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.547 [2024-10-01 13:52:41.517019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.547 [2024-10-01 13:52:41.517790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.547 [2024-10-01 13:52:41.517829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.547 [2024-10-01 13:52:41.517849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.547 [2024-10-01 13:52:41.518182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.547 [2024-10-01 13:52:41.519947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.547 [2024-10-01 13:52:41.520061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.547 [2024-10-01 13:52:41.520109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.547 [2024-10-01 13:52:41.520130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.547 [2024-10-01 13:52:41.520163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.547 [2024-10-01 13:52:41.520194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.547 [2024-10-01 13:52:41.520212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.547 [2024-10-01 13:52:41.520226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.547 [2024-10-01 13:52:41.520256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.547 [2024-10-01 13:52:41.525701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.547 [2024-10-01 13:52:41.525816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.547 [2024-10-01 13:52:41.525887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.547 [2024-10-01 13:52:41.525923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.547 [2024-10-01 13:52:41.525961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.547 [2024-10-01 13:52:41.527206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.547 [2024-10-01 13:52:41.527248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.547 [2024-10-01 13:52:41.527267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.547 [2024-10-01 13:52:41.527511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.547 [2024-10-01 13:52:41.530257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.547 [2024-10-01 13:52:41.530391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.547 [2024-10-01 13:52:41.530424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.547 [2024-10-01 13:52:41.530443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.547 [2024-10-01 13:52:41.530489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.547 [2024-10-01 13:52:41.530523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.547 [2024-10-01 13:52:41.530553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.547 [2024-10-01 13:52:41.530570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.547 [2024-10-01 13:52:41.530602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.547 [2024-10-01 13:52:41.535798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.547 [2024-10-01 13:52:41.535967] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.547 [2024-10-01 13:52:41.536000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.547 [2024-10-01 13:52:41.536019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.547 [2024-10-01 13:52:41.536614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.547 [2024-10-01 13:52:41.536834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.547 [2024-10-01 13:52:41.536872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.547 [2024-10-01 13:52:41.536892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.547 [2024-10-01 13:52:41.537021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.547 [2024-10-01 13:52:41.540619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.547 [2024-10-01 13:52:41.540765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.547 [2024-10-01 13:52:41.540797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.547 [2024-10-01 13:52:41.540815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.547 [2024-10-01 13:52:41.540848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.547 [2024-10-01 13:52:41.540907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.547 [2024-10-01 13:52:41.540944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.547 [2024-10-01 13:52:41.540959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.547 [2024-10-01 13:52:41.540992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.547 [2024-10-01 13:52:41.545908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.547 [2024-10-01 13:52:41.546040] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.547 [2024-10-01 13:52:41.546072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.547 [2024-10-01 13:52:41.546090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.547 [2024-10-01 13:52:41.546123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.547 [2024-10-01 13:52:41.546155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.547 [2024-10-01 13:52:41.546172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.547 [2024-10-01 13:52:41.546187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.547 [2024-10-01 13:52:41.547426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.547 [2024-10-01 13:52:41.550720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.547 [2024-10-01 13:52:41.550835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.547 [2024-10-01 13:52:41.550867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.547 [2024-10-01 13:52:41.550885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.547 [2024-10-01 13:52:41.550933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.547 [2024-10-01 13:52:41.550968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.547 [2024-10-01 13:52:41.550986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.547 [2024-10-01 13:52:41.551000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.547 [2024-10-01 13:52:41.551031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.547 [2024-10-01 13:52:41.556018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.547 [2024-10-01 13:52:41.556135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.547 [2024-10-01 13:52:41.556167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.547 [2024-10-01 13:52:41.556185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.547 [2024-10-01 13:52:41.556218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.547 [2024-10-01 13:52:41.556250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.547 [2024-10-01 13:52:41.556268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.547 [2024-10-01 13:52:41.556282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.547 [2024-10-01 13:52:41.556339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.547 [2024-10-01 13:52:41.560816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.547 [2024-10-01 13:52:41.560952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.547 [2024-10-01 13:52:41.560986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.547 [2024-10-01 13:52:41.561004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.547 [2024-10-01 13:52:41.561037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.547 [2024-10-01 13:52:41.561068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.548 [2024-10-01 13:52:41.561085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.548 [2024-10-01 13:52:41.561100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.548 [2024-10-01 13:52:41.561131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.548 [2024-10-01 13:52:41.566110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.548 [2024-10-01 13:52:41.566228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.548 [2024-10-01 13:52:41.566259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.548 [2024-10-01 13:52:41.566276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.548 [2024-10-01 13:52:41.566309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.548 [2024-10-01 13:52:41.566906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.548 [2024-10-01 13:52:41.566958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.548 [2024-10-01 13:52:41.566977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.548 [2024-10-01 13:52:41.567161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.548 [2024-10-01 13:52:41.570922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.548 [2024-10-01 13:52:41.571052] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.548 [2024-10-01 13:52:41.571084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.548 [2024-10-01 13:52:41.571102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.548 [2024-10-01 13:52:41.571135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.548 [2024-10-01 13:52:41.571166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.548 [2024-10-01 13:52:41.571183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.548 [2024-10-01 13:52:41.571197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.548 [2024-10-01 13:52:41.571228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.548 [2024-10-01 13:52:41.576205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.548 [2024-10-01 13:52:41.577514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.548 [2024-10-01 13:52:41.577560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.548 [2024-10-01 13:52:41.577609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.548 [2024-10-01 13:52:41.578377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.548 [2024-10-01 13:52:41.578732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.548 [2024-10-01 13:52:41.578773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.548 [2024-10-01 13:52:41.578791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.548 [2024-10-01 13:52:41.578863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.548 [2024-10-01 13:52:41.581013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.548 [2024-10-01 13:52:41.581124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.548 [2024-10-01 13:52:41.581156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.548 [2024-10-01 13:52:41.581173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.548 [2024-10-01 13:52:41.581205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.548 [2024-10-01 13:52:41.581236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.548 [2024-10-01 13:52:41.581252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.548 [2024-10-01 13:52:41.581266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.548 [2024-10-01 13:52:41.581848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.548 [2024-10-01 13:52:41.586301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.548 [2024-10-01 13:52:41.586418] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.548 [2024-10-01 13:52:41.586449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.548 [2024-10-01 13:52:41.586467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.548 [2024-10-01 13:52:41.587711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.548 [2024-10-01 13:52:41.587982] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.548 [2024-10-01 13:52:41.588020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.548 [2024-10-01 13:52:41.588039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.548 [2024-10-01 13:52:41.588948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.548 [2024-10-01 13:52:41.591105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.548 [2024-10-01 13:52:41.591219] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.548 [2024-10-01 13:52:41.591251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.548 [2024-10-01 13:52:41.591269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.548 [2024-10-01 13:52:41.592494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.548 [2024-10-01 13:52:41.593303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.548 [2024-10-01 13:52:41.593371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.548 [2024-10-01 13:52:41.593391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.548 [2024-10-01 13:52:41.593712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.548 [2024-10-01 13:52:41.597163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.548 [2024-10-01 13:52:41.597289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.548 [2024-10-01 13:52:41.597320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.548 [2024-10-01 13:52:41.597337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.548 [2024-10-01 13:52:41.597384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.548 [2024-10-01 13:52:41.597419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.548 [2024-10-01 13:52:41.597437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.548 [2024-10-01 13:52:41.597452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.548 [2024-10-01 13:52:41.597482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.548 [2024-10-01 13:52:41.601199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.548 [2024-10-01 13:52:41.601313] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.548 [2024-10-01 13:52:41.601344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.548 [2024-10-01 13:52:41.601362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.548 [2024-10-01 13:52:41.601394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.548 [2024-10-01 13:52:41.601425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.548 [2024-10-01 13:52:41.601441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.548 [2024-10-01 13:52:41.601456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.548 [2024-10-01 13:52:41.602688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.548 [2024-10-01 13:52:41.608329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.548 [2024-10-01 13:52:41.608665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.548 [2024-10-01 13:52:41.608709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.548 [2024-10-01 13:52:41.608730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.548 [2024-10-01 13:52:41.608800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.548 [2024-10-01 13:52:41.608838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.548 [2024-10-01 13:52:41.608857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.548 [2024-10-01 13:52:41.608871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.548 [2024-10-01 13:52:41.608902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.548 [2024-10-01 13:52:41.611291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.548 [2024-10-01 13:52:41.611413] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.548 [2024-10-01 13:52:41.611444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.548 [2024-10-01 13:52:41.611462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.548 [2024-10-01 13:52:41.612049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.548 [2024-10-01 13:52:41.612232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.548 [2024-10-01 13:52:41.612269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.548 [2024-10-01 13:52:41.612287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.548 [2024-10-01 13:52:41.612396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.548 [2024-10-01 13:52:41.619544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.549 [2024-10-01 13:52:41.619668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.549 [2024-10-01 13:52:41.619699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.549 [2024-10-01 13:52:41.619717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.549 [2024-10-01 13:52:41.619750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.549 [2024-10-01 13:52:41.619781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.549 [2024-10-01 13:52:41.619798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.549 [2024-10-01 13:52:41.619812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.549 [2024-10-01 13:52:41.619842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.549 [2024-10-01 13:52:41.623371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.549 [2024-10-01 13:52:41.623712] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.549 [2024-10-01 13:52:41.623756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.549 [2024-10-01 13:52:41.623777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.549 [2024-10-01 13:52:41.623848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.549 [2024-10-01 13:52:41.623886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.549 [2024-10-01 13:52:41.623904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.549 [2024-10-01 13:52:41.623934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.549 [2024-10-01 13:52:41.623969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.549 [2024-10-01 13:52:41.630038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.549 [2024-10-01 13:52:41.630166] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.549 [2024-10-01 13:52:41.630198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.549 [2024-10-01 13:52:41.630217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.549 [2024-10-01 13:52:41.630856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.549 [2024-10-01 13:52:41.631080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.549 [2024-10-01 13:52:41.631118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.549 [2024-10-01 13:52:41.631137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.549 [2024-10-01 13:52:41.631277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.549 [2024-10-01 13:52:41.634825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.549 [2024-10-01 13:52:41.634970] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.549 [2024-10-01 13:52:41.635004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.549 [2024-10-01 13:52:41.635023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.549 [2024-10-01 13:52:41.635057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.549 [2024-10-01 13:52:41.635088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.549 [2024-10-01 13:52:41.635105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.549 [2024-10-01 13:52:41.635119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.549 [2024-10-01 13:52:41.635169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.549 [2024-10-01 13:52:41.640897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.549 [2024-10-01 13:52:41.641076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.549 [2024-10-01 13:52:41.641111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.549 [2024-10-01 13:52:41.641129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.549 [2024-10-01 13:52:41.641165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.549 [2024-10-01 13:52:41.641198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.549 [2024-10-01 13:52:41.641228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.549 [2024-10-01 13:52:41.641243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.549 [2024-10-01 13:52:41.641276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.549 [2024-10-01 13:52:41.645479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.549 [2024-10-01 13:52:41.645605] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.549 [2024-10-01 13:52:41.645637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.549 [2024-10-01 13:52:41.645655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.549 [2024-10-01 13:52:41.646273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.549 [2024-10-01 13:52:41.646464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.549 [2024-10-01 13:52:41.646500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.549 [2024-10-01 13:52:41.646557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.549 [2024-10-01 13:52:41.646674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.549 [2024-10-01 13:52:41.651346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.549 [2024-10-01 13:52:41.651475] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.549 [2024-10-01 13:52:41.651507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.549 [2024-10-01 13:52:41.651524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.549 [2024-10-01 13:52:41.651557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.549 [2024-10-01 13:52:41.651589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.549 [2024-10-01 13:52:41.651607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.549 [2024-10-01 13:52:41.651621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.549 [2024-10-01 13:52:41.651651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.549 [2024-10-01 13:52:41.656160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.549 [2024-10-01 13:52:41.656277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.549 [2024-10-01 13:52:41.656308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.549 [2024-10-01 13:52:41.656326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.549 [2024-10-01 13:52:41.656359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.549 [2024-10-01 13:52:41.656390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.549 [2024-10-01 13:52:41.656408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.549 [2024-10-01 13:52:41.656422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.549 [2024-10-01 13:52:41.656453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.549 [2024-10-01 13:52:41.661694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.549 [2024-10-01 13:52:41.661827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.549 [2024-10-01 13:52:41.661859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.549 [2024-10-01 13:52:41.661878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.549 [2024-10-01 13:52:41.661927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.549 [2024-10-01 13:52:41.661963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.549 [2024-10-01 13:52:41.661981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.549 [2024-10-01 13:52:41.661996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.549 [2024-10-01 13:52:41.662027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.549 [2024-10-01 13:52:41.666621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.549 [2024-10-01 13:52:41.666753] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.549 [2024-10-01 13:52:41.666809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.549 [2024-10-01 13:52:41.666830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.549 [2024-10-01 13:52:41.666865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.549 [2024-10-01 13:52:41.666897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.549 [2024-10-01 13:52:41.666931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.549 [2024-10-01 13:52:41.666949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.549 [2024-10-01 13:52:41.666982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.549 [2024-10-01 13:52:41.671796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.549 [2024-10-01 13:52:41.671946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.549 [2024-10-01 13:52:41.671978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.549 [2024-10-01 13:52:41.671997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.550 [2024-10-01 13:52:41.672031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.550 [2024-10-01 13:52:41.672063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.550 [2024-10-01 13:52:41.672080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.550 [2024-10-01 13:52:41.672095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.550 [2024-10-01 13:52:41.672125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.550 [2024-10-01 13:52:41.676992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.550 [2024-10-01 13:52:41.677132] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.550 [2024-10-01 13:52:41.677165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.550 [2024-10-01 13:52:41.677183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.550 [2024-10-01 13:52:41.677217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.550 [2024-10-01 13:52:41.677249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.550 [2024-10-01 13:52:41.677266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.550 [2024-10-01 13:52:41.677281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.550 [2024-10-01 13:52:41.677312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.550 [2024-10-01 13:52:41.681940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.550 [2024-10-01 13:52:41.682064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.550 [2024-10-01 13:52:41.682096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.550 [2024-10-01 13:52:41.682115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.550 [2024-10-01 13:52:41.682148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.550 [2024-10-01 13:52:41.682215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.550 [2024-10-01 13:52:41.682234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.550 [2024-10-01 13:52:41.682249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.550 [2024-10-01 13:52:41.682279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.550 [2024-10-01 13:52:41.687098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.550 [2024-10-01 13:52:41.687233] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.550 [2024-10-01 13:52:41.687266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.550 [2024-10-01 13:52:41.687284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.550 [2024-10-01 13:52:41.687317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.550 [2024-10-01 13:52:41.687349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.550 [2024-10-01 13:52:41.687366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.550 [2024-10-01 13:52:41.687381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.550 [2024-10-01 13:52:41.687411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.550 [2024-10-01 13:52:41.692266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.550 [2024-10-01 13:52:41.692394] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.550 [2024-10-01 13:52:41.692427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.550 [2024-10-01 13:52:41.692445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.550 [2024-10-01 13:52:41.692492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.550 [2024-10-01 13:52:41.692527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.550 [2024-10-01 13:52:41.692546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.550 [2024-10-01 13:52:41.692560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.550 [2024-10-01 13:52:41.692600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.550 [2024-10-01 13:52:41.697322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.550 [2024-10-01 13:52:41.697445] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.550 [2024-10-01 13:52:41.697477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.550 [2024-10-01 13:52:41.697495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.550 [2024-10-01 13:52:41.697528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.550 [2024-10-01 13:52:41.697560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.550 [2024-10-01 13:52:41.697577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.550 [2024-10-01 13:52:41.697591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.550 [2024-10-01 13:52:41.697663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.550 [2024-10-01 13:52:41.702366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.550 [2024-10-01 13:52:41.702484] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.550 [2024-10-01 13:52:41.702517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.550 [2024-10-01 13:52:41.702534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.550 [2024-10-01 13:52:41.702583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.550 [2024-10-01 13:52:41.702614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.550 [2024-10-01 13:52:41.702632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.550 [2024-10-01 13:52:41.702646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.550 [2024-10-01 13:52:41.702682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.550 [2024-10-01 13:52:41.707645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.550 [2024-10-01 13:52:41.707764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.550 [2024-10-01 13:52:41.707796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.550 [2024-10-01 13:52:41.707814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.550 [2024-10-01 13:52:41.707846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.550 [2024-10-01 13:52:41.707878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.550 [2024-10-01 13:52:41.707895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.550 [2024-10-01 13:52:41.707924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.550 [2024-10-01 13:52:41.707961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.550 [2024-10-01 13:52:41.712524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.550 [2024-10-01 13:52:41.712640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.550 [2024-10-01 13:52:41.712672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.550 [2024-10-01 13:52:41.712690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.550 [2024-10-01 13:52:41.712723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.550 [2024-10-01 13:52:41.712754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.550 [2024-10-01 13:52:41.712772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.550 [2024-10-01 13:52:41.712787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.550 [2024-10-01 13:52:41.712817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.550 [2024-10-01 13:52:41.717742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.550 [2024-10-01 13:52:41.717862] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.550 [2024-10-01 13:52:41.717894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.550 [2024-10-01 13:52:41.717958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.550 [2024-10-01 13:52:41.717995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.550 [2024-10-01 13:52:41.718027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.550 [2024-10-01 13:52:41.718044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.550 [2024-10-01 13:52:41.718058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.550 [2024-10-01 13:52:41.718090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.550 [2024-10-01 13:52:41.722868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.550 [2024-10-01 13:52:41.723050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.550 [2024-10-01 13:52:41.723091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.550 [2024-10-01 13:52:41.723111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.550 [2024-10-01 13:52:41.723146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.550 [2024-10-01 13:52:41.723178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.551 [2024-10-01 13:52:41.723196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.551 [2024-10-01 13:52:41.723211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.551 [2024-10-01 13:52:41.723243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.551 [2024-10-01 13:52:41.727849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.551 [2024-10-01 13:52:41.727990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.551 [2024-10-01 13:52:41.728023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.551 [2024-10-01 13:52:41.728041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.551 [2024-10-01 13:52:41.728083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.551 [2024-10-01 13:52:41.728115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.551 [2024-10-01 13:52:41.728132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.551 [2024-10-01 13:52:41.728147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.551 [2024-10-01 13:52:41.728178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.551 [2024-10-01 13:52:41.732987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.551 [2024-10-01 13:52:41.733112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.551 [2024-10-01 13:52:41.733145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.551 [2024-10-01 13:52:41.733163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.551 [2024-10-01 13:52:41.733197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.551 [2024-10-01 13:52:41.733228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.551 [2024-10-01 13:52:41.733279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.551 [2024-10-01 13:52:41.733295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.551 [2024-10-01 13:52:41.733327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.551 [2024-10-01 13:52:41.738237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.551 [2024-10-01 13:52:41.738397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.551 [2024-10-01 13:52:41.738436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.551 [2024-10-01 13:52:41.738455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.551 [2024-10-01 13:52:41.738489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.551 [2024-10-01 13:52:41.738522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.551 [2024-10-01 13:52:41.738553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.551 [2024-10-01 13:52:41.738571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.551 [2024-10-01 13:52:41.738603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.551 [2024-10-01 13:52:41.743311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.551 [2024-10-01 13:52:41.743486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.551 [2024-10-01 13:52:41.743520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.551 [2024-10-01 13:52:41.743547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.551 [2024-10-01 13:52:41.743599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.551 [2024-10-01 13:52:41.743632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.551 [2024-10-01 13:52:41.743658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.551 [2024-10-01 13:52:41.743674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.551 [2024-10-01 13:52:41.743705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.551 [2024-10-01 13:52:41.748364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.551 [2024-10-01 13:52:41.748520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.551 [2024-10-01 13:52:41.748552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.551 [2024-10-01 13:52:41.748571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.551 [2024-10-01 13:52:41.748606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.551 [2024-10-01 13:52:41.748637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.551 [2024-10-01 13:52:41.748654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.551 [2024-10-01 13:52:41.748670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.551 [2024-10-01 13:52:41.748701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.551 [2024-10-01 13:52:41.753761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.551 [2024-10-01 13:52:41.753930] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.551 [2024-10-01 13:52:41.753963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.551 [2024-10-01 13:52:41.753982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.551 [2024-10-01 13:52:41.754017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.551 [2024-10-01 13:52:41.754049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.551 [2024-10-01 13:52:41.754066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.551 [2024-10-01 13:52:41.754081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.551 [2024-10-01 13:52:41.754112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.551 [2024-10-01 13:52:41.758690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.551 [2024-10-01 13:52:41.758820] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.551 [2024-10-01 13:52:41.758852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.551 [2024-10-01 13:52:41.758871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.551 [2024-10-01 13:52:41.758905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.551 [2024-10-01 13:52:41.758954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.551 [2024-10-01 13:52:41.758973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.551 [2024-10-01 13:52:41.758988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.551 [2024-10-01 13:52:41.759029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.551 [2024-10-01 13:52:41.763883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.551 [2024-10-01 13:52:41.764025] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.551 [2024-10-01 13:52:41.764058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.551 [2024-10-01 13:52:41.764077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.551 [2024-10-01 13:52:41.764110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.551 [2024-10-01 13:52:41.764142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.551 [2024-10-01 13:52:41.764159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.551 [2024-10-01 13:52:41.764174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.551 [2024-10-01 13:52:41.764205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.551 [2024-10-01 13:52:41.768972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.551 [2024-10-01 13:52:41.769122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.551 [2024-10-01 13:52:41.769154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.551 [2024-10-01 13:52:41.769173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.551 [2024-10-01 13:52:41.769239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.551 [2024-10-01 13:52:41.769272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.551 [2024-10-01 13:52:41.769290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.551 [2024-10-01 13:52:41.769305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.551 [2024-10-01 13:52:41.769336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.552 [2024-10-01 13:52:41.774003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.552 [2024-10-01 13:52:41.774130] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.552 [2024-10-01 13:52:41.774162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.552 [2024-10-01 13:52:41.774181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.552 [2024-10-01 13:52:41.774214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.552 [2024-10-01 13:52:41.774246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.552 [2024-10-01 13:52:41.774263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.552 [2024-10-01 13:52:41.774277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.552 [2024-10-01 13:52:41.774308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.552 [2024-10-01 13:52:41.779080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.552 [2024-10-01 13:52:41.779203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.552 [2024-10-01 13:52:41.779234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.552 [2024-10-01 13:52:41.779252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.552 [2024-10-01 13:52:41.779285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.552 [2024-10-01 13:52:41.779317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.552 [2024-10-01 13:52:41.779334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.552 [2024-10-01 13:52:41.779350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.552 [2024-10-01 13:52:41.779381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.552 [2024-10-01 13:52:41.784194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.552 [2024-10-01 13:52:41.784321] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.552 [2024-10-01 13:52:41.784353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.552 [2024-10-01 13:52:41.784371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.552 [2024-10-01 13:52:41.784405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.552 [2024-10-01 13:52:41.784436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.552 [2024-10-01 13:52:41.784454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.552 [2024-10-01 13:52:41.784498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.552 [2024-10-01 13:52:41.784532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.552 [2024-10-01 13:52:41.789175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.552 [2024-10-01 13:52:41.789302] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.552 [2024-10-01 13:52:41.789335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.552 [2024-10-01 13:52:41.789353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.552 [2024-10-01 13:52:41.789386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.552 [2024-10-01 13:52:41.789418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.552 [2024-10-01 13:52:41.789435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.552 [2024-10-01 13:52:41.789449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.552 [2024-10-01 13:52:41.789481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.552 [2024-10-01 13:52:41.794298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.552 [2024-10-01 13:52:41.794439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.552 [2024-10-01 13:52:41.794472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.552 [2024-10-01 13:52:41.794490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.552 [2024-10-01 13:52:41.794524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.552 [2024-10-01 13:52:41.794571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.552 [2024-10-01 13:52:41.794591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.552 [2024-10-01 13:52:41.794606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.552 [2024-10-01 13:52:41.794637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.552 [2024-10-01 13:52:41.799511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.552 [2024-10-01 13:52:41.799670] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.552 [2024-10-01 13:52:41.799704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.552 [2024-10-01 13:52:41.799722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.552 [2024-10-01 13:52:41.799757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.552 [2024-10-01 13:52:41.799790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.552 [2024-10-01 13:52:41.799807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.552 [2024-10-01 13:52:41.799822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.552 [2024-10-01 13:52:41.799854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.552 [2024-10-01 13:52:41.804614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.552 [2024-10-01 13:52:41.804755] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.552 [2024-10-01 13:52:41.804819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.552 [2024-10-01 13:52:41.804857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.552 [2024-10-01 13:52:41.804893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.552 [2024-10-01 13:52:41.804946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.552 [2024-10-01 13:52:41.804967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.552 [2024-10-01 13:52:41.804982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.552 [2024-10-01 13:52:41.805013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.552 [2024-10-01 13:52:41.809629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.552 [2024-10-01 13:52:41.809749] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.552 [2024-10-01 13:52:41.809781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.552 [2024-10-01 13:52:41.809799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.552 [2024-10-01 13:52:41.809832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.552 [2024-10-01 13:52:41.809863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.552 [2024-10-01 13:52:41.809880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.552 [2024-10-01 13:52:41.809895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.552 [2024-10-01 13:52:41.809943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.552 [2024-10-01 13:52:41.814860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.552 [2024-10-01 13:52:41.814997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.552 [2024-10-01 13:52:41.815030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.552 [2024-10-01 13:52:41.815048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.552 [2024-10-01 13:52:41.815081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.552 [2024-10-01 13:52:41.815113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.552 [2024-10-01 13:52:41.815130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.552 [2024-10-01 13:52:41.815145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.552 [2024-10-01 13:52:41.815175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.552 [2024-10-01 13:52:41.819780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.552 [2024-10-01 13:52:41.819908] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.552 [2024-10-01 13:52:41.819954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.552 [2024-10-01 13:52:41.819972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.552 [2024-10-01 13:52:41.820011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.552 [2024-10-01 13:52:41.820073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.552 [2024-10-01 13:52:41.820092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.552 [2024-10-01 13:52:41.820107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.552 [2024-10-01 13:52:41.820138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.552 [2024-10-01 13:52:41.824971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.552 [2024-10-01 13:52:41.825098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.552 [2024-10-01 13:52:41.825130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.553 [2024-10-01 13:52:41.825148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.553 [2024-10-01 13:52:41.825181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.553 [2024-10-01 13:52:41.825212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.553 [2024-10-01 13:52:41.825230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.553 [2024-10-01 13:52:41.825245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.553 [2024-10-01 13:52:41.825277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.553 [2024-10-01 13:52:41.829993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.553 [2024-10-01 13:52:41.830119] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.553 [2024-10-01 13:52:41.830152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.553 [2024-10-01 13:52:41.830170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.553 [2024-10-01 13:52:41.830204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.553 [2024-10-01 13:52:41.830236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.553 [2024-10-01 13:52:41.830253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.553 [2024-10-01 13:52:41.830268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.553 [2024-10-01 13:52:41.830300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.553 [2024-10-01 13:52:41.835070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.553 [2024-10-01 13:52:41.835197] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.553 [2024-10-01 13:52:41.835230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.553 [2024-10-01 13:52:41.835248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.553 [2024-10-01 13:52:41.835281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.553 [2024-10-01 13:52:41.835312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.553 [2024-10-01 13:52:41.835329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.553 [2024-10-01 13:52:41.835344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.553 [2024-10-01 13:52:41.835404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.553 [2024-10-01 13:52:41.840091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.553 [2024-10-01 13:52:41.840212] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.553 [2024-10-01 13:52:41.840245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.553 [2024-10-01 13:52:41.840263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.553 [2024-10-01 13:52:41.840297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.553 [2024-10-01 13:52:41.840329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.553 [2024-10-01 13:52:41.840346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.553 [2024-10-01 13:52:41.840361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.553 [2024-10-01 13:52:41.840391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.553 [2024-10-01 13:52:41.845219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.553 [2024-10-01 13:52:41.845364] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.553 [2024-10-01 13:52:41.845396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.553 [2024-10-01 13:52:41.845414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.553 [2024-10-01 13:52:41.845447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.553 [2024-10-01 13:52:41.845478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.553 [2024-10-01 13:52:41.845496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.553 [2024-10-01 13:52:41.845511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.553 [2024-10-01 13:52:41.845542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.553 [2024-10-01 13:52:41.850192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.553 [2024-10-01 13:52:41.850311] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.553 [2024-10-01 13:52:41.850343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.553 [2024-10-01 13:52:41.850361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.553 [2024-10-01 13:52:41.850394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.553 [2024-10-01 13:52:41.850428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.553 [2024-10-01 13:52:41.850445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.553 [2024-10-01 13:52:41.850460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.553 [2024-10-01 13:52:41.850490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.553 [2024-10-01 13:52:41.855335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.553 [2024-10-01 13:52:41.855452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.553 [2024-10-01 13:52:41.855484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.553 [2024-10-01 13:52:41.855535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.553 [2024-10-01 13:52:41.855570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.553 [2024-10-01 13:52:41.855602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.553 [2024-10-01 13:52:41.855620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.553 [2024-10-01 13:52:41.855634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.553 [2024-10-01 13:52:41.855665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.553 [2024-10-01 13:52:41.860474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.553 [2024-10-01 13:52:41.860598] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.553 [2024-10-01 13:52:41.860631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.553 [2024-10-01 13:52:41.860649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.553 [2024-10-01 13:52:41.860682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.553 [2024-10-01 13:52:41.860723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.553 [2024-10-01 13:52:41.860740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.553 [2024-10-01 13:52:41.860755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.553 [2024-10-01 13:52:41.860786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.553 [2024-10-01 13:52:41.865500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.553 [2024-10-01 13:52:41.865638] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.553 [2024-10-01 13:52:41.865671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.553 [2024-10-01 13:52:41.865689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.553 [2024-10-01 13:52:41.865722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.553 [2024-10-01 13:52:41.865759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.553 [2024-10-01 13:52:41.865777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.553 [2024-10-01 13:52:41.865791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.553 [2024-10-01 13:52:41.865822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.553 8161.23 IOPS, 31.88 MiB/s [2024-10-01 13:52:41.870963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.553 [2024-10-01 13:52:41.871081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.553 [2024-10-01 13:52:41.871113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.553 [2024-10-01 13:52:41.871131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.553 [2024-10-01 13:52:41.871164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.553 [2024-10-01 13:52:41.871196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.553 [2024-10-01 13:52:41.871241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.553 [2024-10-01 13:52:41.871257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.553 [2024-10-01 13:52:41.871289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.553 [2024-10-01 13:52:41.875887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.553 [2024-10-01 13:52:41.876040] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.553 [2024-10-01 13:52:41.876083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.553 [2024-10-01 13:52:41.876101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.553 [2024-10-01 13:52:41.876134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.553 [2024-10-01 13:52:41.876165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.554 [2024-10-01 13:52:41.876182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.554 [2024-10-01 13:52:41.876197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.554 [2024-10-01 13:52:41.876227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.554 [2024-10-01 13:52:41.881060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.554 [2024-10-01 13:52:41.881195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.554 [2024-10-01 13:52:41.881228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.554 [2024-10-01 13:52:41.881246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.554 [2024-10-01 13:52:41.881289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.554 [2024-10-01 13:52:41.881320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.554 [2024-10-01 13:52:41.881337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.554 [2024-10-01 13:52:41.881352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.554 [2024-10-01 13:52:41.881383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.554 [2024-10-01 13:52:41.886002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.554 [2024-10-01 13:52:41.886126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.554 [2024-10-01 13:52:41.886159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.554 [2024-10-01 13:52:41.886177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.554 [2024-10-01 13:52:41.886211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.554 [2024-10-01 13:52:41.886242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.554 [2024-10-01 13:52:41.886264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.554 [2024-10-01 13:52:41.886279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.554 [2024-10-01 13:52:41.886312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.554 [2024-10-01 13:52:41.891242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.554 [2024-10-01 13:52:41.891391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.554 [2024-10-01 13:52:41.891424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.554 [2024-10-01 13:52:41.891442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.554 [2024-10-01 13:52:41.891476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.554 [2024-10-01 13:52:41.891508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.554 [2024-10-01 13:52:41.891525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.554 [2024-10-01 13:52:41.891548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.554 [2024-10-01 13:52:41.891580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.554 [2024-10-01 13:52:41.896250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.554 [2024-10-01 13:52:41.896377] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.554 [2024-10-01 13:52:41.896410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.554 [2024-10-01 13:52:41.896429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.554 [2024-10-01 13:52:41.896463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.554 [2024-10-01 13:52:41.896493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.554 [2024-10-01 13:52:41.896511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.554 [2024-10-01 13:52:41.896525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.554 [2024-10-01 13:52:41.896557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.554 [2024-10-01 13:52:41.901353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.554 [2024-10-01 13:52:41.901472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.554 [2024-10-01 13:52:41.901504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.554 [2024-10-01 13:52:41.901522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.554 [2024-10-01 13:52:41.901556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.554 [2024-10-01 13:52:41.901587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.554 [2024-10-01 13:52:41.901603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.554 [2024-10-01 13:52:41.901618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.554 [2024-10-01 13:52:41.901649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.554 [2024-10-01 13:52:41.906450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.554 [2024-10-01 13:52:41.906574] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.554 [2024-10-01 13:52:41.906607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.554 [2024-10-01 13:52:41.906626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.554 [2024-10-01 13:52:41.906693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.554 [2024-10-01 13:52:41.906726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.554 [2024-10-01 13:52:41.906743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.554 [2024-10-01 13:52:41.906761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.554 [2024-10-01 13:52:41.906792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.554 [2024-10-01 13:52:41.911448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.554 [2024-10-01 13:52:41.911592] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.554 [2024-10-01 13:52:41.911623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.554 [2024-10-01 13:52:41.911649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.554 [2024-10-01 13:52:41.911681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.554 [2024-10-01 13:52:41.911735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.554 [2024-10-01 13:52:41.911752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.554 [2024-10-01 13:52:41.911771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.554 [2024-10-01 13:52:41.911801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.554 [2024-10-01 13:52:41.916539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.554 [2024-10-01 13:52:41.916663] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.554 [2024-10-01 13:52:41.916694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.554 [2024-10-01 13:52:41.916713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.554 [2024-10-01 13:52:41.916746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.554 [2024-10-01 13:52:41.916778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.554 [2024-10-01 13:52:41.916796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.554 [2024-10-01 13:52:41.916811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.554 [2024-10-01 13:52:41.916842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.554 [2024-10-01 13:52:41.921733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.554 [2024-10-01 13:52:41.921852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.554 [2024-10-01 13:52:41.921885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.554 [2024-10-01 13:52:41.921903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.554 [2024-10-01 13:52:41.921955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.554 [2024-10-01 13:52:41.921987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.554 [2024-10-01 13:52:41.922013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.554 [2024-10-01 13:52:41.922068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.554 [2024-10-01 13:52:41.922121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.554 [2024-10-01 13:52:41.926672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.554 [2024-10-01 13:52:41.926791] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.554 [2024-10-01 13:52:41.926823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.554 [2024-10-01 13:52:41.926841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.554 [2024-10-01 13:52:41.926874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.554 [2024-10-01 13:52:41.926906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.554 [2024-10-01 13:52:41.926941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.554 [2024-10-01 13:52:41.926957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.555 [2024-10-01 13:52:41.926988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.555 [2024-10-01 13:52:41.931829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.555 [2024-10-01 13:52:41.931963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.555 [2024-10-01 13:52:41.931995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.555 [2024-10-01 13:52:41.932013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.555 [2024-10-01 13:52:41.932048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.555 [2024-10-01 13:52:41.932079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.555 [2024-10-01 13:52:41.932096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.555 [2024-10-01 13:52:41.932122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.555 [2024-10-01 13:52:41.932153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.555 [2024-10-01 13:52:41.936979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.555 [2024-10-01 13:52:41.937099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.555 [2024-10-01 13:52:41.937131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.555 [2024-10-01 13:52:41.937149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.555 [2024-10-01 13:52:41.937182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.555 [2024-10-01 13:52:41.937222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.555 [2024-10-01 13:52:41.937239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.555 [2024-10-01 13:52:41.937254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.555 [2024-10-01 13:52:41.937284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.555 [2024-10-01 13:52:41.941935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.555 [2024-10-01 13:52:41.942085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.555 [2024-10-01 13:52:41.942117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.555 [2024-10-01 13:52:41.942135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.555 [2024-10-01 13:52:41.942169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.555 [2024-10-01 13:52:41.942200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.555 [2024-10-01 13:52:41.942217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.555 [2024-10-01 13:52:41.942231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.555 [2024-10-01 13:52:41.942263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.555 [2024-10-01 13:52:41.947073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.555 [2024-10-01 13:52:41.947204] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.555 [2024-10-01 13:52:41.947241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.555 [2024-10-01 13:52:41.947259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.555 [2024-10-01 13:52:41.947292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.555 [2024-10-01 13:52:41.947324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.555 [2024-10-01 13:52:41.947341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.555 [2024-10-01 13:52:41.947356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.555 [2024-10-01 13:52:41.947386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.555 [2024-10-01 13:52:41.952105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.555 [2024-10-01 13:52:41.952231] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.555 [2024-10-01 13:52:41.952264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.555 [2024-10-01 13:52:41.952283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.555 [2024-10-01 13:52:41.952332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.555 [2024-10-01 13:52:41.952367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.555 [2024-10-01 13:52:41.952385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.555 [2024-10-01 13:52:41.952400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.555 [2024-10-01 13:52:41.952431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.555 [2024-10-01 13:52:41.957174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.555 [2024-10-01 13:52:41.957296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.555 [2024-10-01 13:52:41.957329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.555 [2024-10-01 13:52:41.957346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.555 [2024-10-01 13:52:41.957380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.555 [2024-10-01 13:52:41.957443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.555 [2024-10-01 13:52:41.957463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.555 [2024-10-01 13:52:41.957477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.555 [2024-10-01 13:52:41.957508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.555 [2024-10-01 13:52:41.962202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.555 [2024-10-01 13:52:41.962325] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.555 [2024-10-01 13:52:41.962357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.555 [2024-10-01 13:52:41.962375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.555 [2024-10-01 13:52:41.962409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.555 [2024-10-01 13:52:41.962440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.555 [2024-10-01 13:52:41.962457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.555 [2024-10-01 13:52:41.962472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.555 [2024-10-01 13:52:41.962502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.555 [2024-10-01 13:52:41.967273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.555 [2024-10-01 13:52:41.967405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.555 [2024-10-01 13:52:41.967438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.555 [2024-10-01 13:52:41.967456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.555 [2024-10-01 13:52:41.967488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.555 [2024-10-01 13:52:41.967520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.555 [2024-10-01 13:52:41.967537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.555 [2024-10-01 13:52:41.967551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.555 [2024-10-01 13:52:41.967582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.555 [2024-10-01 13:52:41.972301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.555 [2024-10-01 13:52:41.972428] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.555 [2024-10-01 13:52:41.972460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.555 [2024-10-01 13:52:41.972478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.555 [2024-10-01 13:52:41.972511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.555 [2024-10-01 13:52:41.972543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.555 [2024-10-01 13:52:41.972561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.555 [2024-10-01 13:52:41.972575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.555 [2024-10-01 13:52:41.972634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.555 [2024-10-01 13:52:41.977371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.555 [2024-10-01 13:52:41.977494] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.555 [2024-10-01 13:52:41.977526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.556 [2024-10-01 13:52:41.977544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.556 [2024-10-01 13:52:41.977577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.556 [2024-10-01 13:52:41.977609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.556 [2024-10-01 13:52:41.977626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.556 [2024-10-01 13:52:41.977641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.556 [2024-10-01 13:52:41.977671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.556 [2024-10-01 13:52:41.982403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.556 [2024-10-01 13:52:41.982521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.556 [2024-10-01 13:52:41.982566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.556 [2024-10-01 13:52:41.982586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.556 [2024-10-01 13:52:41.982646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.556 [2024-10-01 13:52:41.982682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.556 [2024-10-01 13:52:41.982700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.556 [2024-10-01 13:52:41.982714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.556 [2024-10-01 13:52:41.982745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.556 [2024-10-01 13:52:41.987469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.556 [2024-10-01 13:52:41.987587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.556 [2024-10-01 13:52:41.987618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.556 [2024-10-01 13:52:41.987637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.556 [2024-10-01 13:52:41.987669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.556 [2024-10-01 13:52:41.987700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.556 [2024-10-01 13:52:41.987718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.556 [2024-10-01 13:52:41.987732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.556 [2024-10-01 13:52:41.987762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.556 [2024-10-01 13:52:41.992498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.556 [2024-10-01 13:52:41.992612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.556 [2024-10-01 13:52:41.992644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.556 [2024-10-01 13:52:41.992690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.556 [2024-10-01 13:52:41.992725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.556 [2024-10-01 13:52:41.992757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.556 [2024-10-01 13:52:41.992774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.556 [2024-10-01 13:52:41.992788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.556 [2024-10-01 13:52:41.992819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.556 [2024-10-01 13:52:41.997564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.556 [2024-10-01 13:52:41.997678] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.556 [2024-10-01 13:52:41.997709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.556 [2024-10-01 13:52:41.997727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.556 [2024-10-01 13:52:41.997760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.556 [2024-10-01 13:52:41.997790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.556 [2024-10-01 13:52:41.997808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.556 [2024-10-01 13:52:41.997822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.556 [2024-10-01 13:52:41.997852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.556 [2024-10-01 13:52:42.002589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.556 [2024-10-01 13:52:42.002716] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.556 [2024-10-01 13:52:42.002747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.556 [2024-10-01 13:52:42.002765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.556 [2024-10-01 13:52:42.002797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.556 [2024-10-01 13:52:42.002828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.556 [2024-10-01 13:52:42.002845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.556 [2024-10-01 13:52:42.002859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.556 [2024-10-01 13:52:42.002889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.556 [2024-10-01 13:52:42.007652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.556 [2024-10-01 13:52:42.007777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.556 [2024-10-01 13:52:42.007809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.556 [2024-10-01 13:52:42.007826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.556 [2024-10-01 13:52:42.007860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.556 [2024-10-01 13:52:42.007891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.556 [2024-10-01 13:52:42.007948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.556 [2024-10-01 13:52:42.007966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.556 [2024-10-01 13:52:42.007999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.556 [2024-10-01 13:52:42.012709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.556 [2024-10-01 13:52:42.012827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.556 [2024-10-01 13:52:42.012858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.556 [2024-10-01 13:52:42.012877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.556 [2024-10-01 13:52:42.012949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.556 [2024-10-01 13:52:42.012997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.556 [2024-10-01 13:52:42.013015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.556 [2024-10-01 13:52:42.013029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.556 [2024-10-01 13:52:42.013061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.556 [2024-10-01 13:52:42.017760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.556 [2024-10-01 13:52:42.017876] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.556 [2024-10-01 13:52:42.017908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.556 [2024-10-01 13:52:42.017942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.556 [2024-10-01 13:52:42.017976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.556 [2024-10-01 13:52:42.018007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.556 [2024-10-01 13:52:42.018025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.556 [2024-10-01 13:52:42.018040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.556 [2024-10-01 13:52:42.018070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.556 [2024-10-01 13:52:42.022803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.556 [2024-10-01 13:52:42.022936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.556 [2024-10-01 13:52:42.022969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.556 [2024-10-01 13:52:42.022993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.556 [2024-10-01 13:52:42.023027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.556 [2024-10-01 13:52:42.023059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.556 [2024-10-01 13:52:42.023076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.556 [2024-10-01 13:52:42.023091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.556 [2024-10-01 13:52:42.023122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.556 [2024-10-01 13:52:42.028159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.556 [2024-10-01 13:52:42.028297] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.556 [2024-10-01 13:52:42.028329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.556 [2024-10-01 13:52:42.028347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.556 [2024-10-01 13:52:42.028379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.556 [2024-10-01 13:52:42.028411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.557 [2024-10-01 13:52:42.028428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.557 [2024-10-01 13:52:42.028443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.557 [2024-10-01 13:52:42.028475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.557 [2024-10-01 13:52:42.033132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.557 [2024-10-01 13:52:42.033260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.557 [2024-10-01 13:52:42.033299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.557 [2024-10-01 13:52:42.033319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.557 [2024-10-01 13:52:42.033353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.557 [2024-10-01 13:52:42.033385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.557 [2024-10-01 13:52:42.033402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.557 [2024-10-01 13:52:42.033417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.557 [2024-10-01 13:52:42.033449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.557 [2024-10-01 13:52:42.038255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.557 [2024-10-01 13:52:42.038378] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.557 [2024-10-01 13:52:42.038411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.557 [2024-10-01 13:52:42.038429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.557 [2024-10-01 13:52:42.038462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.557 [2024-10-01 13:52:42.038494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.557 [2024-10-01 13:52:42.038512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.557 [2024-10-01 13:52:42.038527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.557 [2024-10-01 13:52:42.038572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.557 [2024-10-01 13:52:42.043369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.557 [2024-10-01 13:52:42.043488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.557 [2024-10-01 13:52:42.043532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.557 [2024-10-01 13:52:42.043554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.557 [2024-10-01 13:52:42.043621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.557 [2024-10-01 13:52:42.043654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.557 [2024-10-01 13:52:42.043671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.557 [2024-10-01 13:52:42.043686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.557 [2024-10-01 13:52:42.043717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.557 [2024-10-01 13:52:42.048363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.557 [2024-10-01 13:52:42.048492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.557 [2024-10-01 13:52:42.048530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.557 [2024-10-01 13:52:42.048550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.557 [2024-10-01 13:52:42.048583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.557 [2024-10-01 13:52:42.048614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.557 [2024-10-01 13:52:42.048631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.557 [2024-10-01 13:52:42.048646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.557 [2024-10-01 13:52:42.048676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.557 [2024-10-01 13:52:42.053461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.557 [2024-10-01 13:52:42.053578] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.557 [2024-10-01 13:52:42.053617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.557 [2024-10-01 13:52:42.053635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.557 [2024-10-01 13:52:42.053668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.557 [2024-10-01 13:52:42.053699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.557 [2024-10-01 13:52:42.053717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.557 [2024-10-01 13:52:42.053732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.557 [2024-10-01 13:52:42.053762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.557 [2024-10-01 13:52:42.058660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.557 [2024-10-01 13:52:42.058781] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.557 [2024-10-01 13:52:42.058813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.557 [2024-10-01 13:52:42.058831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.557 [2024-10-01 13:52:42.058864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.557 [2024-10-01 13:52:42.058895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.557 [2024-10-01 13:52:42.058928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.557 [2024-10-01 13:52:42.058974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.557 [2024-10-01 13:52:42.059035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.557 [2024-10-01 13:52:42.063577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.557 [2024-10-01 13:52:42.063694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.557 [2024-10-01 13:52:42.063726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.557 [2024-10-01 13:52:42.063744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.557 [2024-10-01 13:52:42.063777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.557 [2024-10-01 13:52:42.063809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.557 [2024-10-01 13:52:42.063825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.557 [2024-10-01 13:52:42.063840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.557 [2024-10-01 13:52:42.063870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.557 [2024-10-01 13:52:42.068753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.557 [2024-10-01 13:52:42.068869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.557 [2024-10-01 13:52:42.068900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.557 [2024-10-01 13:52:42.068936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.557 [2024-10-01 13:52:42.068973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.557 [2024-10-01 13:52:42.069005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.557 [2024-10-01 13:52:42.069022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.557 [2024-10-01 13:52:42.069037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.557 [2024-10-01 13:52:42.069067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.557 [2024-10-01 13:52:42.073754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.557 [2024-10-01 13:52:42.073882] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.557 [2024-10-01 13:52:42.073929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.557 [2024-10-01 13:52:42.073950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.557 [2024-10-01 13:52:42.073984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.557 [2024-10-01 13:52:42.074016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.557 [2024-10-01 13:52:42.074034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.557 [2024-10-01 13:52:42.074048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.557 [2024-10-01 13:52:42.074079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.557 [2024-10-01 13:52:42.078844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.557 [2024-10-01 13:52:42.078997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.557 [2024-10-01 13:52:42.079030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.557 [2024-10-01 13:52:42.079048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.557 [2024-10-01 13:52:42.079080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.557 [2024-10-01 13:52:42.079111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.557 [2024-10-01 13:52:42.079128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.557 [2024-10-01 13:52:42.079143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.557 [2024-10-01 13:52:42.079174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.557 [2024-10-01 13:52:42.083862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.558 [2024-10-01 13:52:42.083993] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.558 [2024-10-01 13:52:42.084030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.558 [2024-10-01 13:52:42.084048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.558 [2024-10-01 13:52:42.084081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.558 [2024-10-01 13:52:42.084111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.558 [2024-10-01 13:52:42.084128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.558 [2024-10-01 13:52:42.084142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.558 [2024-10-01 13:52:42.084173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.558 [2024-10-01 13:52:42.088968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.558 [2024-10-01 13:52:42.089083] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.558 [2024-10-01 13:52:42.089115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.558 [2024-10-01 13:52:42.089133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.558 [2024-10-01 13:52:42.089166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.558 [2024-10-01 13:52:42.089196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.558 [2024-10-01 13:52:42.089213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.558 [2024-10-01 13:52:42.089228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.558 [2024-10-01 13:52:42.089258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.558 [2024-10-01 13:52:42.093969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.558 [2024-10-01 13:52:42.094085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.558 [2024-10-01 13:52:42.094117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.558 [2024-10-01 13:52:42.094134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.558 [2024-10-01 13:52:42.094185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.558 [2024-10-01 13:52:42.095423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.558 [2024-10-01 13:52:42.095463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.558 [2024-10-01 13:52:42.095483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.558 [2024-10-01 13:52:42.096256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.558 [2024-10-01 13:52:42.099061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.558 [2024-10-01 13:52:42.099185] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.558 [2024-10-01 13:52:42.099217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.558 [2024-10-01 13:52:42.099234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.558 [2024-10-01 13:52:42.099810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.558 [2024-10-01 13:52:42.100023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.558 [2024-10-01 13:52:42.100060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.558 [2024-10-01 13:52:42.100079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.558 [2024-10-01 13:52:42.100187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.558 [2024-10-01 13:52:42.104064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.558 [2024-10-01 13:52:42.104184] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.558 [2024-10-01 13:52:42.104216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.558 [2024-10-01 13:52:42.104234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.558 [2024-10-01 13:52:42.104267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.558 [2024-10-01 13:52:42.104298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.558 [2024-10-01 13:52:42.104315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.558 [2024-10-01 13:52:42.104339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.558 [2024-10-01 13:52:42.105552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.558 [2024-10-01 13:52:42.110348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.558 [2024-10-01 13:52:42.111227] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.558 [2024-10-01 13:52:42.111282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.558 [2024-10-01 13:52:42.111303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.558 [2024-10-01 13:52:42.111616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.558 [2024-10-01 13:52:42.111713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.558 [2024-10-01 13:52:42.111740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.558 [2024-10-01 13:52:42.111755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.558 [2024-10-01 13:52:42.111818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.558 [2024-10-01 13:52:42.114160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.558 [2024-10-01 13:52:42.114270] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.558 [2024-10-01 13:52:42.114300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.558 [2024-10-01 13:52:42.114318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.558 [2024-10-01 13:52:42.114927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.558 [2024-10-01 13:52:42.115128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.558 [2024-10-01 13:52:42.115158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.558 [2024-10-01 13:52:42.115174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.558 [2024-10-01 13:52:42.115281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.558 [2024-10-01 13:52:42.121689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.558 [2024-10-01 13:52:42.122501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.558 [2024-10-01 13:52:42.122556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.558 [2024-10-01 13:52:42.122578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.558 [2024-10-01 13:52:42.122675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.558 [2024-10-01 13:52:42.122713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.558 [2024-10-01 13:52:42.122731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.558 [2024-10-01 13:52:42.122745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.558 [2024-10-01 13:52:42.122781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.558 [2024-10-01 13:52:42.124249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.558 [2024-10-01 13:52:42.125550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.558 [2024-10-01 13:52:42.125595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.558 [2024-10-01 13:52:42.125616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.558 [2024-10-01 13:52:42.126385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.558 [2024-10-01 13:52:42.126744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.558 [2024-10-01 13:52:42.126783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.558 [2024-10-01 13:52:42.126801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.558 [2024-10-01 13:52:42.126873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.558 [2024-10-01 13:52:42.132938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.558 [2024-10-01 13:52:42.133050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.558 [2024-10-01 13:52:42.133098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.558 [2024-10-01 13:52:42.133119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.558 [2024-10-01 13:52:42.133693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.558 [2024-10-01 13:52:42.133879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.558 [2024-10-01 13:52:42.133932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.558 [2024-10-01 13:52:42.133950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.558 [2024-10-01 13:52:42.134057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.558 [2024-10-01 13:52:42.134337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.558 [2024-10-01 13:52:42.134438] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.558 [2024-10-01 13:52:42.134470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.559 [2024-10-01 13:52:42.134488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.559 [2024-10-01 13:52:42.135711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.559 [2024-10-01 13:52:42.135978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.559 [2024-10-01 13:52:42.136016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.559 [2024-10-01 13:52:42.136043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.559 [2024-10-01 13:52:42.136953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.559 [2024-10-01 13:52:42.143468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.559 [2024-10-01 13:52:42.143582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.559 [2024-10-01 13:52:42.143613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.559 [2024-10-01 13:52:42.143631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.559 [2024-10-01 13:52:42.143664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.559 [2024-10-01 13:52:42.143695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.559 [2024-10-01 13:52:42.143712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.559 [2024-10-01 13:52:42.143726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.559 [2024-10-01 13:52:42.143756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.559 [2024-10-01 13:52:42.145140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.559 [2024-10-01 13:52:42.145259] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.559 [2024-10-01 13:52:42.145290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.559 [2024-10-01 13:52:42.145308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.559 [2024-10-01 13:52:42.145341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.559 [2024-10-01 13:52:42.145390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.559 [2024-10-01 13:52:42.145410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.559 [2024-10-01 13:52:42.145424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.559 [2024-10-01 13:52:42.145455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.559 [2024-10-01 13:52:42.153557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.559 [2024-10-01 13:52:42.153672] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.559 [2024-10-01 13:52:42.153703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.559 [2024-10-01 13:52:42.153721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.559 [2024-10-01 13:52:42.153754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.559 [2024-10-01 13:52:42.153785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.559 [2024-10-01 13:52:42.153802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.559 [2024-10-01 13:52:42.153816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.559 [2024-10-01 13:52:42.153846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.559 [2024-10-01 13:52:42.156267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.559 [2024-10-01 13:52:42.156381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.559 [2024-10-01 13:52:42.156412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.559 [2024-10-01 13:52:42.156437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.559 [2024-10-01 13:52:42.156470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.559 [2024-10-01 13:52:42.156500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.559 [2024-10-01 13:52:42.156517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.559 [2024-10-01 13:52:42.156531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.559 [2024-10-01 13:52:42.156562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.559 [2024-10-01 13:52:42.163651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.559 [2024-10-01 13:52:42.163777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.559 [2024-10-01 13:52:42.163809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.559 [2024-10-01 13:52:42.163827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.559 [2024-10-01 13:52:42.165074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.559 [2024-10-01 13:52:42.165303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.559 [2024-10-01 13:52:42.165340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.559 [2024-10-01 13:52:42.165359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.559 [2024-10-01 13:52:42.166268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.559 [2024-10-01 13:52:42.167163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.559 [2024-10-01 13:52:42.167276] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.559 [2024-10-01 13:52:42.167308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.559 [2024-10-01 13:52:42.167325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.559 [2024-10-01 13:52:42.167358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.559 [2024-10-01 13:52:42.167388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.559 [2024-10-01 13:52:42.167405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.559 [2024-10-01 13:52:42.167420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.559 [2024-10-01 13:52:42.167450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.559 [2024-10-01 13:52:42.174195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.559 [2024-10-01 13:52:42.174319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.559 [2024-10-01 13:52:42.174350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.559 [2024-10-01 13:52:42.174368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.559 [2024-10-01 13:52:42.174409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.559 [2024-10-01 13:52:42.174440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.559 [2024-10-01 13:52:42.174457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.559 [2024-10-01 13:52:42.174471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.559 [2024-10-01 13:52:42.174519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.559 [2024-10-01 13:52:42.177707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.559 [2024-10-01 13:52:42.177828] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.559 [2024-10-01 13:52:42.177859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.559 [2024-10-01 13:52:42.177877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.559 [2024-10-01 13:52:42.177909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.559 [2024-10-01 13:52:42.177960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.559 [2024-10-01 13:52:42.177977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.559 [2024-10-01 13:52:42.177991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.559 [2024-10-01 13:52:42.178022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.559 [2024-10-01 13:52:42.185243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.559 [2024-10-01 13:52:42.185361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.559 [2024-10-01 13:52:42.185392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.559 [2024-10-01 13:52:42.185427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.559 [2024-10-01 13:52:42.185463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.559 [2024-10-01 13:52:42.185494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.559 [2024-10-01 13:52:42.185511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.559 [2024-10-01 13:52:42.185525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.559 [2024-10-01 13:52:42.185555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.559 [2024-10-01 13:52:42.187798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.559 [2024-10-01 13:52:42.188465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.559 [2024-10-01 13:52:42.188509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.559 [2024-10-01 13:52:42.188530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.559 [2024-10-01 13:52:42.188706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.559 [2024-10-01 13:52:42.188821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.560 [2024-10-01 13:52:42.188842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.560 [2024-10-01 13:52:42.188856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.560 [2024-10-01 13:52:42.188895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.560 [2024-10-01 13:52:42.195745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.560 [2024-10-01 13:52:42.195863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.560 [2024-10-01 13:52:42.195894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.560 [2024-10-01 13:52:42.195927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.560 [2024-10-01 13:52:42.195965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.560 [2024-10-01 13:52:42.195995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.560 [2024-10-01 13:52:42.196013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.560 [2024-10-01 13:52:42.196027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.560 [2024-10-01 13:52:42.196057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.560 [2024-10-01 13:52:42.199612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.560 [2024-10-01 13:52:42.199766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.560 [2024-10-01 13:52:42.199799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.560 [2024-10-01 13:52:42.199817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.560 [2024-10-01 13:52:42.199850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.560 [2024-10-01 13:52:42.199898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.560 [2024-10-01 13:52:42.199938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.560 [2024-10-01 13:52:42.199977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.560 [2024-10-01 13:52:42.200012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.560 [2024-10-01 13:52:42.205848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.560 [2024-10-01 13:52:42.206557] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.560 [2024-10-01 13:52:42.206606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.560 [2024-10-01 13:52:42.206628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.560 [2024-10-01 13:52:42.206798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.560 [2024-10-01 13:52:42.206930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.560 [2024-10-01 13:52:42.206953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.560 [2024-10-01 13:52:42.206968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.560 [2024-10-01 13:52:42.207008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.560 [2024-10-01 13:52:42.210359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.560 [2024-10-01 13:52:42.210474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.560 [2024-10-01 13:52:42.210504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.560 [2024-10-01 13:52:42.210522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.560 [2024-10-01 13:52:42.210568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.560 [2024-10-01 13:52:42.210601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.560 [2024-10-01 13:52:42.210618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.560 [2024-10-01 13:52:42.210633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.560 [2024-10-01 13:52:42.210663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.560 [2024-10-01 13:52:42.216030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.560 [2024-10-01 13:52:42.216145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.560 [2024-10-01 13:52:42.216176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.560 [2024-10-01 13:52:42.216194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.560 [2024-10-01 13:52:42.216226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.560 [2024-10-01 13:52:42.216257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.560 [2024-10-01 13:52:42.216274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.560 [2024-10-01 13:52:42.216289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.560 [2024-10-01 13:52:42.216319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.560 [2024-10-01 13:52:42.220449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.560 [2024-10-01 13:52:42.221161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.560 [2024-10-01 13:52:42.221211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.560 [2024-10-01 13:52:42.221233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.560 [2024-10-01 13:52:42.221400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.560 [2024-10-01 13:52:42.221527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.560 [2024-10-01 13:52:42.221560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.560 [2024-10-01 13:52:42.221578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.560 [2024-10-01 13:52:42.221620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.560 [2024-10-01 13:52:42.226130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.560 [2024-10-01 13:52:42.226244] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.560 [2024-10-01 13:52:42.226275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.560 [2024-10-01 13:52:42.226293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.560 [2024-10-01 13:52:42.226326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.560 [2024-10-01 13:52:42.226356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.560 [2024-10-01 13:52:42.226374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.560 [2024-10-01 13:52:42.226388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.560 [2024-10-01 13:52:42.226417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.560 [2024-10-01 13:52:42.230783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.560 [2024-10-01 13:52:42.230897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.560 [2024-10-01 13:52:42.230943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.560 [2024-10-01 13:52:42.230962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.560 [2024-10-01 13:52:42.230995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.560 [2024-10-01 13:52:42.231025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.560 [2024-10-01 13:52:42.231043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.560 [2024-10-01 13:52:42.231057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.560 [2024-10-01 13:52:42.231087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.560 [2024-10-01 13:52:42.236220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.560 [2024-10-01 13:52:42.236337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.560 [2024-10-01 13:52:42.236369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.560 [2024-10-01 13:52:42.236387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.560 [2024-10-01 13:52:42.236453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.560 [2024-10-01 13:52:42.236490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.560 [2024-10-01 13:52:42.236508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.560 [2024-10-01 13:52:42.236522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.560 [2024-10-01 13:52:42.236553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.561 [2024-10-01 13:52:42.240974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.561 [2024-10-01 13:52:42.241089] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.561 [2024-10-01 13:52:42.241121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.561 [2024-10-01 13:52:42.241138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.561 [2024-10-01 13:52:42.241171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.561 [2024-10-01 13:52:42.241202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.561 [2024-10-01 13:52:42.241219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.561 [2024-10-01 13:52:42.241233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.561 [2024-10-01 13:52:42.241263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.561 [2024-10-01 13:52:42.246315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.561 [2024-10-01 13:52:42.246427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.561 [2024-10-01 13:52:42.246459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.561 [2024-10-01 13:52:42.246477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.561 [2024-10-01 13:52:42.246510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.561 [2024-10-01 13:52:42.246555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.561 [2024-10-01 13:52:42.246575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.561 [2024-10-01 13:52:42.246589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.561 [2024-10-01 13:52:42.247180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.561 [2024-10-01 13:52:42.251069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.561 [2024-10-01 13:52:42.251183] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.561 [2024-10-01 13:52:42.251214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.561 [2024-10-01 13:52:42.251232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.561 [2024-10-01 13:52:42.251264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.561 [2024-10-01 13:52:42.251295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.561 [2024-10-01 13:52:42.251312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.561 [2024-10-01 13:52:42.251326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.561 [2024-10-01 13:52:42.251369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.561 [2024-10-01 13:52:42.258638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.561 [2024-10-01 13:52:42.258790] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.561 [2024-10-01 13:52:42.258823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.561 [2024-10-01 13:52:42.258841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.561 [2024-10-01 13:52:42.258874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.561 [2024-10-01 13:52:42.258905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.561 [2024-10-01 13:52:42.258938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.561 [2024-10-01 13:52:42.258954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.561 [2024-10-01 13:52:42.258986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.561 [2024-10-01 13:52:42.261160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.561 [2024-10-01 13:52:42.261268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.561 [2024-10-01 13:52:42.261299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.561 [2024-10-01 13:52:42.261316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.561 [2024-10-01 13:52:42.261349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.561 [2024-10-01 13:52:42.261945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.561 [2024-10-01 13:52:42.261983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.561 [2024-10-01 13:52:42.262002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.561 [2024-10-01 13:52:42.262193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.561 [2024-10-01 13:52:42.269459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.561 [2024-10-01 13:52:42.269576] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.561 [2024-10-01 13:52:42.269607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.561 [2024-10-01 13:52:42.269625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.561 [2024-10-01 13:52:42.269658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.561 [2024-10-01 13:52:42.269689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.561 [2024-10-01 13:52:42.269707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.561 [2024-10-01 13:52:42.269721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.561 [2024-10-01 13:52:42.269751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.561 [2024-10-01 13:52:42.273449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.561 [2024-10-01 13:52:42.273604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.561 [2024-10-01 13:52:42.273658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.561 [2024-10-01 13:52:42.273678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.561 [2024-10-01 13:52:42.273713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.561 [2024-10-01 13:52:42.273744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.561 [2024-10-01 13:52:42.273761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.561 [2024-10-01 13:52:42.273775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.561 [2024-10-01 13:52:42.273807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.561 [2024-10-01 13:52:42.279721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.561 [2024-10-01 13:52:42.279838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.561 [2024-10-01 13:52:42.279869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.561 [2024-10-01 13:52:42.279887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.561 [2024-10-01 13:52:42.280483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.561 [2024-10-01 13:52:42.280672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.561 [2024-10-01 13:52:42.280709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.561 [2024-10-01 13:52:42.280727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.561 [2024-10-01 13:52:42.280835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.561 [2024-10-01 13:52:42.284322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.561 [2024-10-01 13:52:42.284438] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.561 [2024-10-01 13:52:42.284469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.561 [2024-10-01 13:52:42.284487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.561 [2024-10-01 13:52:42.284520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.561 [2024-10-01 13:52:42.284551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.561 [2024-10-01 13:52:42.284568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.561 [2024-10-01 13:52:42.284582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.561 [2024-10-01 13:52:42.284614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.561 [2024-10-01 13:52:42.290053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.561 [2024-10-01 13:52:42.290170] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.561 [2024-10-01 13:52:42.290201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.561 [2024-10-01 13:52:42.290219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.561 [2024-10-01 13:52:42.290252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.561 [2024-10-01 13:52:42.290303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.561 [2024-10-01 13:52:42.290323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.561 [2024-10-01 13:52:42.290338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.561 [2024-10-01 13:52:42.290369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.561 [2024-10-01 13:52:42.294417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.561 [2024-10-01 13:52:42.294530] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.561 [2024-10-01 13:52:42.294574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.562 [2024-10-01 13:52:42.294593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.562 [2024-10-01 13:52:42.295180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.562 [2024-10-01 13:52:42.295366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.562 [2024-10-01 13:52:42.295402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.562 [2024-10-01 13:52:42.295420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.562 [2024-10-01 13:52:42.295529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.562 [2024-10-01 13:52:42.300142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.562 [2024-10-01 13:52:42.300255] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.562 [2024-10-01 13:52:42.300287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.562 [2024-10-01 13:52:42.300305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.562 [2024-10-01 13:52:42.300337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.562 [2024-10-01 13:52:42.300367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.562 [2024-10-01 13:52:42.300384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.562 [2024-10-01 13:52:42.300399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.562 [2024-10-01 13:52:42.300429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.562 [2024-10-01 13:52:42.304616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.562 [2024-10-01 13:52:42.304730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.562 [2024-10-01 13:52:42.304761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.562 [2024-10-01 13:52:42.304779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.562 [2024-10-01 13:52:42.304811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.562 [2024-10-01 13:52:42.304842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.562 [2024-10-01 13:52:42.304859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.562 [2024-10-01 13:52:42.304873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.562 [2024-10-01 13:52:42.304904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.562 [2024-10-01 13:52:42.310230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.562 [2024-10-01 13:52:42.310344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.562 [2024-10-01 13:52:42.310376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.562 [2024-10-01 13:52:42.310394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.562 [2024-10-01 13:52:42.310427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.562 [2024-10-01 13:52:42.311648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.562 [2024-10-01 13:52:42.311688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.562 [2024-10-01 13:52:42.311708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.562 [2024-10-01 13:52:42.311933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.562 [2024-10-01 13:52:42.314705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.562 [2024-10-01 13:52:42.314818] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.562 [2024-10-01 13:52:42.314849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.562 [2024-10-01 13:52:42.314867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.562 [2024-10-01 13:52:42.314899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.562 [2024-10-01 13:52:42.314948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.562 [2024-10-01 13:52:42.314966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.562 [2024-10-01 13:52:42.314980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.562 [2024-10-01 13:52:42.315010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.562 [2024-10-01 13:52:42.320821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.562 [2024-10-01 13:52:42.320960] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.562 [2024-10-01 13:52:42.320993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.562 [2024-10-01 13:52:42.321011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.562 [2024-10-01 13:52:42.321044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.562 [2024-10-01 13:52:42.321092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.562 [2024-10-01 13:52:42.321114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.562 [2024-10-01 13:52:42.321129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.562 [2024-10-01 13:52:42.321160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.562 [2024-10-01 13:52:42.324797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.562 [2024-10-01 13:52:42.324925] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.562 [2024-10-01 13:52:42.324957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.562 [2024-10-01 13:52:42.324992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.562 [2024-10-01 13:52:42.326207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.562 [2024-10-01 13:52:42.326457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.562 [2024-10-01 13:52:42.326485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.562 [2024-10-01 13:52:42.326501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.562 [2024-10-01 13:52:42.327411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.562 [2024-10-01 13:52:42.331869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.562 [2024-10-01 13:52:42.331996] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.562 [2024-10-01 13:52:42.332029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.562 [2024-10-01 13:52:42.332046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.562 [2024-10-01 13:52:42.332079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.562 [2024-10-01 13:52:42.332110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.562 [2024-10-01 13:52:42.332127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.562 [2024-10-01 13:52:42.332142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.562 [2024-10-01 13:52:42.332172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.562 [2024-10-01 13:52:42.335306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.562 [2024-10-01 13:52:42.335427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.562 [2024-10-01 13:52:42.335458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.562 [2024-10-01 13:52:42.335476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.562 [2024-10-01 13:52:42.335508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.562 [2024-10-01 13:52:42.335539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.562 [2024-10-01 13:52:42.335555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.562 [2024-10-01 13:52:42.335570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.562 [2024-10-01 13:52:42.335600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.562 [2024-10-01 13:52:42.342446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.562 [2024-10-01 13:52:42.342571] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.562 [2024-10-01 13:52:42.342604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.562 [2024-10-01 13:52:42.342622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.562 [2024-10-01 13:52:42.342656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.562 [2024-10-01 13:52:42.342686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.562 [2024-10-01 13:52:42.342703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.562 [2024-10-01 13:52:42.342735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.562 [2024-10-01 13:52:42.342768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.562 [2024-10-01 13:52:42.346331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.562 [2024-10-01 13:52:42.346447] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.562 [2024-10-01 13:52:42.346478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.562 [2024-10-01 13:52:42.346496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.562 [2024-10-01 13:52:42.346529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.562 [2024-10-01 13:52:42.346576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.563 [2024-10-01 13:52:42.346605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.563 [2024-10-01 13:52:42.346619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.563 [2024-10-01 13:52:42.346650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.563 [2024-10-01 13:52:42.353227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.563 [2024-10-01 13:52:42.353422] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.563 [2024-10-01 13:52:42.353453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.563 [2024-10-01 13:52:42.353471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.563 [2024-10-01 13:52:42.353512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.563 [2024-10-01 13:52:42.353546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.563 [2024-10-01 13:52:42.353564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.563 [2024-10-01 13:52:42.353578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.563 [2024-10-01 13:52:42.353620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.563 [2024-10-01 13:52:42.356981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.563 [2024-10-01 13:52:42.357095] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.563 [2024-10-01 13:52:42.357126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.563 [2024-10-01 13:52:42.357144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.563 [2024-10-01 13:52:42.357176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.563 [2024-10-01 13:52:42.357218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.563 [2024-10-01 13:52:42.357235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.563 [2024-10-01 13:52:42.357250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.563 [2024-10-01 13:52:42.357281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.563 [2024-10-01 13:52:42.363319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.563 [2024-10-01 13:52:42.363456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.563 [2024-10-01 13:52:42.363488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.563 [2024-10-01 13:52:42.363505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.563 [2024-10-01 13:52:42.364101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.563 [2024-10-01 13:52:42.364300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.563 [2024-10-01 13:52:42.364338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.563 [2024-10-01 13:52:42.364356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.563 [2024-10-01 13:52:42.364466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.563 [2024-10-01 13:52:42.367613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.563 [2024-10-01 13:52:42.367858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.563 [2024-10-01 13:52:42.367892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.563 [2024-10-01 13:52:42.367937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.563 [2024-10-01 13:52:42.368052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.563 [2024-10-01 13:52:42.368095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.563 [2024-10-01 13:52:42.368113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.563 [2024-10-01 13:52:42.368128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.563 [2024-10-01 13:52:42.368159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.563 [2024-10-01 13:52:42.375420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.563 [2024-10-01 13:52:42.375575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.563 [2024-10-01 13:52:42.375608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.563 [2024-10-01 13:52:42.375626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.563 [2024-10-01 13:52:42.375659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.563 [2024-10-01 13:52:42.375690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.563 [2024-10-01 13:52:42.375708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.563 [2024-10-01 13:52:42.375722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.563 [2024-10-01 13:52:42.375753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.563 [2024-10-01 13:52:42.377708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.563 [2024-10-01 13:52:42.377817] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.563 [2024-10-01 13:52:42.377848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.563 [2024-10-01 13:52:42.377866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.563 [2024-10-01 13:52:42.377936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.563 [2024-10-01 13:52:42.377972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.563 [2024-10-01 13:52:42.377990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.563 [2024-10-01 13:52:42.378004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.563 [2024-10-01 13:52:42.378035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.563 [2024-10-01 13:52:42.386041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.563 [2024-10-01 13:52:42.386164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.563 [2024-10-01 13:52:42.386195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.563 [2024-10-01 13:52:42.386213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.563 [2024-10-01 13:52:42.386246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.563 [2024-10-01 13:52:42.386277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.563 [2024-10-01 13:52:42.386295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.563 [2024-10-01 13:52:42.386309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.563 [2024-10-01 13:52:42.386340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.563 [2024-10-01 13:52:42.389853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.563 [2024-10-01 13:52:42.390017] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.563 [2024-10-01 13:52:42.390050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.563 [2024-10-01 13:52:42.390068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.563 [2024-10-01 13:52:42.390102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.563 [2024-10-01 13:52:42.390134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.563 [2024-10-01 13:52:42.390151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.563 [2024-10-01 13:52:42.390165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.563 [2024-10-01 13:52:42.390196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.563 [2024-10-01 13:52:42.396809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.563 [2024-10-01 13:52:42.397021] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.563 [2024-10-01 13:52:42.397054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.563 [2024-10-01 13:52:42.397073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.563 [2024-10-01 13:52:42.397114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.563 [2024-10-01 13:52:42.397148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.563 [2024-10-01 13:52:42.397166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.563 [2024-10-01 13:52:42.397204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.563 [2024-10-01 13:52:42.397239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.563 [2024-10-01 13:52:42.400596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.563 [2024-10-01 13:52:42.400716] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.563 [2024-10-01 13:52:42.400747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.563 [2024-10-01 13:52:42.400764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.563 [2024-10-01 13:52:42.400796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.563 [2024-10-01 13:52:42.400827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.563 [2024-10-01 13:52:42.400844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.563 [2024-10-01 13:52:42.400858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.563 [2024-10-01 13:52:42.400889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.563 [2024-10-01 13:52:42.406903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.564 [2024-10-01 13:52:42.407030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.564 [2024-10-01 13:52:42.407060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.564 [2024-10-01 13:52:42.407078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.564 [2024-10-01 13:52:42.407651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.564 [2024-10-01 13:52:42.407838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.564 [2024-10-01 13:52:42.407875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.564 [2024-10-01 13:52:42.407893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.564 [2024-10-01 13:52:42.408019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.564 [2024-10-01 13:52:42.411436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.564 [2024-10-01 13:52:42.411558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.564 [2024-10-01 13:52:42.411589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.564 [2024-10-01 13:52:42.411606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.564 [2024-10-01 13:52:42.411639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.564 [2024-10-01 13:52:42.411670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.564 [2024-10-01 13:52:42.411687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.564 [2024-10-01 13:52:42.411701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.564 [2024-10-01 13:52:42.411731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.564 [2024-10-01 13:52:42.418983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.564 [2024-10-01 13:52:42.419136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.564 [2024-10-01 13:52:42.419192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.564 [2024-10-01 13:52:42.419213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.564 [2024-10-01 13:52:42.419247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.564 [2024-10-01 13:52:42.419279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.564 [2024-10-01 13:52:42.419297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.564 [2024-10-01 13:52:42.419311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.564 [2024-10-01 13:52:42.419342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.564 [2024-10-01 13:52:42.421531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.564 [2024-10-01 13:52:42.422195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.564 [2024-10-01 13:52:42.422239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.564 [2024-10-01 13:52:42.422259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.564 [2024-10-01 13:52:42.422436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.564 [2024-10-01 13:52:42.422567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.564 [2024-10-01 13:52:42.422599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.564 [2024-10-01 13:52:42.422617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.564 [2024-10-01 13:52:42.422667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.564 [2024-10-01 13:52:42.429551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.564 [2024-10-01 13:52:42.429668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.564 [2024-10-01 13:52:42.429700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.564 [2024-10-01 13:52:42.429717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.564 [2024-10-01 13:52:42.429750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.564 [2024-10-01 13:52:42.429781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.564 [2024-10-01 13:52:42.429797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.564 [2024-10-01 13:52:42.429811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.564 [2024-10-01 13:52:42.429841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.564 [2024-10-01 13:52:42.433371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.564 [2024-10-01 13:52:42.433525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.564 [2024-10-01 13:52:42.433557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.564 [2024-10-01 13:52:42.433575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.564 [2024-10-01 13:52:42.433608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.564 [2024-10-01 13:52:42.433662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.564 [2024-10-01 13:52:42.433681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.564 [2024-10-01 13:52:42.433695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.564 [2024-10-01 13:52:42.433726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.564 [2024-10-01 13:52:42.440373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.564 [2024-10-01 13:52:42.440499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.564 [2024-10-01 13:52:42.440531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.564 [2024-10-01 13:52:42.440549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.564 [2024-10-01 13:52:42.440582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.564 [2024-10-01 13:52:42.440613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.564 [2024-10-01 13:52:42.440630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.564 [2024-10-01 13:52:42.440645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.564 [2024-10-01 13:52:42.440675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.564 [2024-10-01 13:52:42.444001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.564 [2024-10-01 13:52:42.444115] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.564 [2024-10-01 13:52:42.444146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.564 [2024-10-01 13:52:42.444164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.564 [2024-10-01 13:52:42.444197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.564 [2024-10-01 13:52:42.444228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.564 [2024-10-01 13:52:42.444245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.564 [2024-10-01 13:52:42.444259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.564 [2024-10-01 13:52:42.444290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.564 [2024-10-01 13:52:42.451155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.564 [2024-10-01 13:52:42.451349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.564 [2024-10-01 13:52:42.451380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.564 [2024-10-01 13:52:42.451399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.564 [2024-10-01 13:52:42.451439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.564 [2024-10-01 13:52:42.451474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.564 [2024-10-01 13:52:42.451491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.564 [2024-10-01 13:52:42.451505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.564 [2024-10-01 13:52:42.451536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.564 [2024-10-01 13:52:42.454755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.564 [2024-10-01 13:52:42.454877] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.564 [2024-10-01 13:52:42.454909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.564 [2024-10-01 13:52:42.454943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.564 [2024-10-01 13:52:42.454977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.565 [2024-10-01 13:52:42.455008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.565 [2024-10-01 13:52:42.455025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.565 [2024-10-01 13:52:42.455042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.565 [2024-10-01 13:52:42.455073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.565 [2024-10-01 13:52:42.462256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.565 [2024-10-01 13:52:42.462370] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.565 [2024-10-01 13:52:42.462401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.565 [2024-10-01 13:52:42.462419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.565 [2024-10-01 13:52:42.462451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.565 [2024-10-01 13:52:42.462481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.565 [2024-10-01 13:52:42.462498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.565 [2024-10-01 13:52:42.462513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.565 [2024-10-01 13:52:42.462555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.565 [2024-10-01 13:52:42.465588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.565 [2024-10-01 13:52:42.465707] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.565 [2024-10-01 13:52:42.465737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.565 [2024-10-01 13:52:42.465754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.565 [2024-10-01 13:52:42.465786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.565 [2024-10-01 13:52:42.465816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.565 [2024-10-01 13:52:42.465833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.565 [2024-10-01 13:52:42.465847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.565 [2024-10-01 13:52:42.465878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.565 [2024-10-01 13:52:42.472687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.565 [2024-10-01 13:52:42.472801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.565 [2024-10-01 13:52:42.472833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.565 [2024-10-01 13:52:42.472868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.565 [2024-10-01 13:52:42.472903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.565 [2024-10-01 13:52:42.472952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.565 [2024-10-01 13:52:42.472971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.565 [2024-10-01 13:52:42.472985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.565 [2024-10-01 13:52:42.473016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.565 [2024-10-01 13:52:42.476530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.565 [2024-10-01 13:52:42.476645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.565 [2024-10-01 13:52:42.476676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.565 [2024-10-01 13:52:42.476694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.565 [2024-10-01 13:52:42.476726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.565 [2024-10-01 13:52:42.476757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.565 [2024-10-01 13:52:42.476773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.565 [2024-10-01 13:52:42.476788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.565 [2024-10-01 13:52:42.476819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.565 [2024-10-01 13:52:42.483435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.565 [2024-10-01 13:52:42.483558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.565 [2024-10-01 13:52:42.483589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.565 [2024-10-01 13:52:42.483608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.565 [2024-10-01 13:52:42.483641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.565 [2024-10-01 13:52:42.483671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.565 [2024-10-01 13:52:42.483688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.565 [2024-10-01 13:52:42.483702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.565 [2024-10-01 13:52:42.483733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.565 [2024-10-01 13:52:42.487060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.565 [2024-10-01 13:52:42.487173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.565 [2024-10-01 13:52:42.487204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.565 [2024-10-01 13:52:42.487221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.565 [2024-10-01 13:52:42.487254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.565 [2024-10-01 13:52:42.487285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.565 [2024-10-01 13:52:42.487317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.565 [2024-10-01 13:52:42.487332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.565 [2024-10-01 13:52:42.487364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.565 [2024-10-01 13:52:42.494199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.565 [2024-10-01 13:52:42.494392] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.565 [2024-10-01 13:52:42.494424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.565 [2024-10-01 13:52:42.494442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.565 [2024-10-01 13:52:42.494482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.565 [2024-10-01 13:52:42.494516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.565 [2024-10-01 13:52:42.494533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.565 [2024-10-01 13:52:42.494564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.565 [2024-10-01 13:52:42.494595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.565 [2024-10-01 13:52:42.497851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.565 [2024-10-01 13:52:42.497988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.565 [2024-10-01 13:52:42.498019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.565 [2024-10-01 13:52:42.498037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.565 [2024-10-01 13:52:42.498069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.565 [2024-10-01 13:52:42.498100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.565 [2024-10-01 13:52:42.498117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.565 [2024-10-01 13:52:42.498132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.565 [2024-10-01 13:52:42.498162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.565 [2024-10-01 13:52:42.505330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.565 [2024-10-01 13:52:42.505445] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.565 [2024-10-01 13:52:42.505477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.565 [2024-10-01 13:52:42.505494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.565 [2024-10-01 13:52:42.505539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.565 [2024-10-01 13:52:42.505569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.565 [2024-10-01 13:52:42.505586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.565 [2024-10-01 13:52:42.505600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.565 [2024-10-01 13:52:42.505630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.565 [2024-10-01 13:52:42.508654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.565 [2024-10-01 13:52:42.508861] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.565 [2024-10-01 13:52:42.508894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.565 [2024-10-01 13:52:42.508925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.565 [2024-10-01 13:52:42.508970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.565 [2024-10-01 13:52:42.509004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.565 [2024-10-01 13:52:42.509022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.565 [2024-10-01 13:52:42.509035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.566 [2024-10-01 13:52:42.509066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.566 [2024-10-01 13:52:42.515872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.566 [2024-10-01 13:52:42.516001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.566 [2024-10-01 13:52:42.516033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.566 [2024-10-01 13:52:42.516051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.566 [2024-10-01 13:52:42.516084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.566 [2024-10-01 13:52:42.516115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.566 [2024-10-01 13:52:42.516132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.566 [2024-10-01 13:52:42.516146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.566 [2024-10-01 13:52:42.516176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.566 [2024-10-01 13:52:42.519685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.566 [2024-10-01 13:52:42.519836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.566 [2024-10-01 13:52:42.519868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.566 [2024-10-01 13:52:42.519886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.566 [2024-10-01 13:52:42.519934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.566 [2024-10-01 13:52:42.519970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.566 [2024-10-01 13:52:42.519986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.566 [2024-10-01 13:52:42.520001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.566 [2024-10-01 13:52:42.520032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.566 [2024-10-01 13:52:42.526719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.566 [2024-10-01 13:52:42.526840] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.566 [2024-10-01 13:52:42.526871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.566 [2024-10-01 13:52:42.526889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.566 [2024-10-01 13:52:42.526965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.566 [2024-10-01 13:52:42.526998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.566 [2024-10-01 13:52:42.527015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.566 [2024-10-01 13:52:42.527031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.566 [2024-10-01 13:52:42.527062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.566 [2024-10-01 13:52:42.530379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.566 [2024-10-01 13:52:42.530502] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.566 [2024-10-01 13:52:42.530533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.566 [2024-10-01 13:52:42.530568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.566 [2024-10-01 13:52:42.530601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.566 [2024-10-01 13:52:42.530632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.566 [2024-10-01 13:52:42.530648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.566 [2024-10-01 13:52:42.530662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.566 [2024-10-01 13:52:42.530692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.566 [2024-10-01 13:52:42.537552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.566 [2024-10-01 13:52:42.537675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.566 [2024-10-01 13:52:42.537707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.566 [2024-10-01 13:52:42.537724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.566 [2024-10-01 13:52:42.537756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.566 [2024-10-01 13:52:42.537787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.566 [2024-10-01 13:52:42.537805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.566 [2024-10-01 13:52:42.537819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.566 [2024-10-01 13:52:42.537848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.566 [2024-10-01 13:52:42.541083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.566 [2024-10-01 13:52:42.541204] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.566 [2024-10-01 13:52:42.541236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.566 [2024-10-01 13:52:42.541253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.566 [2024-10-01 13:52:42.541297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.566 [2024-10-01 13:52:42.541327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.566 [2024-10-01 13:52:42.541344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.566 [2024-10-01 13:52:42.541374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.566 [2024-10-01 13:52:42.541410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.566 [2024-10-01 13:52:42.548545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.566 [2024-10-01 13:52:42.548661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.566 [2024-10-01 13:52:42.548693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.566 [2024-10-01 13:52:42.548710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.566 [2024-10-01 13:52:42.548743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.566 [2024-10-01 13:52:42.548774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.566 [2024-10-01 13:52:42.548792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.566 [2024-10-01 13:52:42.548806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.566 [2024-10-01 13:52:42.548836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.566 [2024-10-01 13:52:42.551950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.566 [2024-10-01 13:52:42.552070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.566 [2024-10-01 13:52:42.552101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.566 [2024-10-01 13:52:42.552119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.566 [2024-10-01 13:52:42.552151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.566 [2024-10-01 13:52:42.552181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.566 [2024-10-01 13:52:42.552198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.566 [2024-10-01 13:52:42.552212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.566 [2024-10-01 13:52:42.552243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.566 [2024-10-01 13:52:42.559078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.566 [2024-10-01 13:52:42.559192] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.566 [2024-10-01 13:52:42.559223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.566 [2024-10-01 13:52:42.559241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.566 [2024-10-01 13:52:42.559273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.566 [2024-10-01 13:52:42.559307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.566 [2024-10-01 13:52:42.559325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.566 [2024-10-01 13:52:42.559339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.566 [2024-10-01 13:52:42.559368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.566 [2024-10-01 13:52:42.562876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.566 [2024-10-01 13:52:42.563002] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.566 [2024-10-01 13:52:42.563050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.566 [2024-10-01 13:52:42.563071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.566 [2024-10-01 13:52:42.563105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.566 [2024-10-01 13:52:42.563136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.566 [2024-10-01 13:52:42.563153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.566 [2024-10-01 13:52:42.563167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.566 [2024-10-01 13:52:42.563199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.566 [2024-10-01 13:52:42.569860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.566 [2024-10-01 13:52:42.569998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.566 [2024-10-01 13:52:42.570030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.567 [2024-10-01 13:52:42.570048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.567 [2024-10-01 13:52:42.570080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.567 [2024-10-01 13:52:42.570110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.567 [2024-10-01 13:52:42.570128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.567 [2024-10-01 13:52:42.570142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.567 [2024-10-01 13:52:42.570177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.567 [2024-10-01 13:52:42.573491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.567 [2024-10-01 13:52:42.573603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.567 [2024-10-01 13:52:42.573635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.567 [2024-10-01 13:52:42.573653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.567 [2024-10-01 13:52:42.573685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.567 [2024-10-01 13:52:42.573716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.567 [2024-10-01 13:52:42.573732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.567 [2024-10-01 13:52:42.573746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.567 [2024-10-01 13:52:42.573776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.567 [2024-10-01 13:52:42.580518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.567 [2024-10-01 13:52:42.580771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.567 [2024-10-01 13:52:42.580819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.567 [2024-10-01 13:52:42.580840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.567 [2024-10-01 13:52:42.580973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.567 [2024-10-01 13:52:42.581037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.567 [2024-10-01 13:52:42.581058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.567 [2024-10-01 13:52:42.581073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.567 [2024-10-01 13:52:42.581104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.567 [2024-10-01 13:52:42.584246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.567 [2024-10-01 13:52:42.584373] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.567 [2024-10-01 13:52:42.584405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.567 [2024-10-01 13:52:42.584422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.567 [2024-10-01 13:52:42.584455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.567 [2024-10-01 13:52:42.584485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.567 [2024-10-01 13:52:42.584502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.567 [2024-10-01 13:52:42.584516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.567 [2024-10-01 13:52:42.584547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.567 [2024-10-01 13:52:42.591744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.567 [2024-10-01 13:52:42.591921] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.567 [2024-10-01 13:52:42.591955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.567 [2024-10-01 13:52:42.591974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.567 [2024-10-01 13:52:42.592008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.567 [2024-10-01 13:52:42.592039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.567 [2024-10-01 13:52:42.592055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.567 [2024-10-01 13:52:42.592070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.567 [2024-10-01 13:52:42.592101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.567 [2024-10-01 13:52:42.594351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.567 [2024-10-01 13:52:42.595023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.567 [2024-10-01 13:52:42.595066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.567 [2024-10-01 13:52:42.595087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.567 [2024-10-01 13:52:42.595272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.567 [2024-10-01 13:52:42.595388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.567 [2024-10-01 13:52:42.595409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.567 [2024-10-01 13:52:42.595423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.567 [2024-10-01 13:52:42.595462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.567 [2024-10-01 13:52:42.602408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.567 [2024-10-01 13:52:42.602523] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.567 [2024-10-01 13:52:42.602568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.567 [2024-10-01 13:52:42.602588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.567 [2024-10-01 13:52:42.602621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.567 [2024-10-01 13:52:42.602652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.567 [2024-10-01 13:52:42.602669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.567 [2024-10-01 13:52:42.602683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.567 [2024-10-01 13:52:42.602713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.567 [2024-10-01 13:52:42.606187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.567 [2024-10-01 13:52:42.606338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.567 [2024-10-01 13:52:42.606370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.567 [2024-10-01 13:52:42.606388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.567 [2024-10-01 13:52:42.606421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.567 [2024-10-01 13:52:42.606452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.567 [2024-10-01 13:52:42.606469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.567 [2024-10-01 13:52:42.606483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.567 [2024-10-01 13:52:42.606514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.567 [2024-10-01 13:52:42.613242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.567 [2024-10-01 13:52:42.613365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.567 [2024-10-01 13:52:42.613396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.567 [2024-10-01 13:52:42.613415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.567 [2024-10-01 13:52:42.613448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.567 [2024-10-01 13:52:42.613479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.567 [2024-10-01 13:52:42.613496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.567 [2024-10-01 13:52:42.613526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.567 [2024-10-01 13:52:42.613557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.567 [2024-10-01 13:52:42.616896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.567 [2024-10-01 13:52:42.617033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.567 [2024-10-01 13:52:42.617065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.567 [2024-10-01 13:52:42.617106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.567 [2024-10-01 13:52:42.617141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.567 [2024-10-01 13:52:42.617172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.567 [2024-10-01 13:52:42.617190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.567 [2024-10-01 13:52:42.617204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.567 [2024-10-01 13:52:42.617235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.567 [2024-10-01 13:52:42.623340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.567 [2024-10-01 13:52:42.624018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.567 [2024-10-01 13:52:42.624064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.567 [2024-10-01 13:52:42.624086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.567 [2024-10-01 13:52:42.624254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.567 [2024-10-01 13:52:42.624369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.568 [2024-10-01 13:52:42.624391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.568 [2024-10-01 13:52:42.624405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.568 [2024-10-01 13:52:42.624445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.568 [2024-10-01 13:52:42.627649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.568 [2024-10-01 13:52:42.627771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.568 [2024-10-01 13:52:42.627802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.568 [2024-10-01 13:52:42.627820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.568 [2024-10-01 13:52:42.627853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.568 [2024-10-01 13:52:42.627884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.568 [2024-10-01 13:52:42.627901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.568 [2024-10-01 13:52:42.627930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.568 [2024-10-01 13:52:42.627964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.568 [2024-10-01 13:52:42.635110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.568 [2024-10-01 13:52:42.635264] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.568 [2024-10-01 13:52:42.635297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.568 [2024-10-01 13:52:42.635316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.568 [2024-10-01 13:52:42.635350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.568 [2024-10-01 13:52:42.635381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.568 [2024-10-01 13:52:42.635417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.568 [2024-10-01 13:52:42.635433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.568 [2024-10-01 13:52:42.635471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.568 [2024-10-01 13:52:42.638520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.568 [2024-10-01 13:52:42.638651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.568 [2024-10-01 13:52:42.638682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.568 [2024-10-01 13:52:42.638699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.568 [2024-10-01 13:52:42.638732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.568 [2024-10-01 13:52:42.638768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.568 [2024-10-01 13:52:42.638785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.568 [2024-10-01 13:52:42.638800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.568 [2024-10-01 13:52:42.638830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.568 [2024-10-01 13:52:42.645718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.568 [2024-10-01 13:52:42.645835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.568 [2024-10-01 13:52:42.645866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.568 [2024-10-01 13:52:42.645884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.568 [2024-10-01 13:52:42.645934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.568 [2024-10-01 13:52:42.645969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.568 [2024-10-01 13:52:42.645987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.568 [2024-10-01 13:52:42.646001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.568 [2024-10-01 13:52:42.646031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.568 [2024-10-01 13:52:42.649536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.568 [2024-10-01 13:52:42.649655] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.568 [2024-10-01 13:52:42.649687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.568 [2024-10-01 13:52:42.649704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.568 [2024-10-01 13:52:42.649737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.568 [2024-10-01 13:52:42.649769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.568 [2024-10-01 13:52:42.649786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.568 [2024-10-01 13:52:42.649800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.568 [2024-10-01 13:52:42.649830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.568 [2024-10-01 13:52:42.656508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.568 [2024-10-01 13:52:42.656649] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.568 [2024-10-01 13:52:42.656681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.568 [2024-10-01 13:52:42.656699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.568 [2024-10-01 13:52:42.656739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.568 [2024-10-01 13:52:42.656770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.568 [2024-10-01 13:52:42.656786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.568 [2024-10-01 13:52:42.656801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.568 [2024-10-01 13:52:42.656831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.568 [2024-10-01 13:52:42.660167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.568 [2024-10-01 13:52:42.660282] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.568 [2024-10-01 13:52:42.660314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.568 [2024-10-01 13:52:42.660332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.568 [2024-10-01 13:52:42.660364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.568 [2024-10-01 13:52:42.660395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.568 [2024-10-01 13:52:42.660412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.568 [2024-10-01 13:52:42.660426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.568 [2024-10-01 13:52:42.660457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.568 [2024-10-01 13:52:42.666620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.568 [2024-10-01 13:52:42.667299] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.568 [2024-10-01 13:52:42.667344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.568 [2024-10-01 13:52:42.667365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.568 [2024-10-01 13:52:42.667525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.568 [2024-10-01 13:52:42.667640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.568 [2024-10-01 13:52:42.667661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.568 [2024-10-01 13:52:42.667675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.568 [2024-10-01 13:52:42.667713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.568 [2024-10-01 13:52:42.671005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.568 [2024-10-01 13:52:42.671127] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.568 [2024-10-01 13:52:42.671159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.568 [2024-10-01 13:52:42.671176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.568 [2024-10-01 13:52:42.671229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.568 [2024-10-01 13:52:42.671261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.568 [2024-10-01 13:52:42.671279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.568 [2024-10-01 13:52:42.671292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.568 [2024-10-01 13:52:42.671323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.568 [2024-10-01 13:52:42.678438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.569 [2024-10-01 13:52:42.678609] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.569 [2024-10-01 13:52:42.678643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.569 [2024-10-01 13:52:42.678661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.569 [2024-10-01 13:52:42.678695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.569 [2024-10-01 13:52:42.678726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.569 [2024-10-01 13:52:42.678744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.569 [2024-10-01 13:52:42.678758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.569 [2024-10-01 13:52:42.678789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.569 [2024-10-01 13:52:42.681820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.569 [2024-10-01 13:52:42.681954] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.569 [2024-10-01 13:52:42.681987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.569 [2024-10-01 13:52:42.682005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.569 [2024-10-01 13:52:42.682038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.569 [2024-10-01 13:52:42.682069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.569 [2024-10-01 13:52:42.682086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.569 [2024-10-01 13:52:42.682105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.569 [2024-10-01 13:52:42.682136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.569 [2024-10-01 13:52:42.688944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.569 [2024-10-01 13:52:42.689058] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.569 [2024-10-01 13:52:42.689089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.569 [2024-10-01 13:52:42.689107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.569 [2024-10-01 13:52:42.689139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.569 [2024-10-01 13:52:42.689170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.569 [2024-10-01 13:52:42.689187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.569 [2024-10-01 13:52:42.689207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.569 [2024-10-01 13:52:42.689247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.569 [2024-10-01 13:52:42.692799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.569 [2024-10-01 13:52:42.692928] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.569 [2024-10-01 13:52:42.692960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.569 [2024-10-01 13:52:42.692978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.569 [2024-10-01 13:52:42.693011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.569 [2024-10-01 13:52:42.693042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.569 [2024-10-01 13:52:42.693059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.569 [2024-10-01 13:52:42.693073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.569 [2024-10-01 13:52:42.693104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.569 [2024-10-01 13:52:42.699659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.569 [2024-10-01 13:52:42.699793] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.569 [2024-10-01 13:52:42.699825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.569 [2024-10-01 13:52:42.699843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.569 [2024-10-01 13:52:42.699876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.569 [2024-10-01 13:52:42.699907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.569 [2024-10-01 13:52:42.699943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.569 [2024-10-01 13:52:42.699958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.569 [2024-10-01 13:52:42.699989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.569 [2024-10-01 13:52:42.703349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.569 [2024-10-01 13:52:42.703463] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.569 [2024-10-01 13:52:42.703494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.569 [2024-10-01 13:52:42.703511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.569 [2024-10-01 13:52:42.703544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.569 [2024-10-01 13:52:42.703574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.569 [2024-10-01 13:52:42.703591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.569 [2024-10-01 13:52:42.703605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.569 [2024-10-01 13:52:42.703635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.569 [2024-10-01 13:52:42.710490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.569 [2024-10-01 13:52:42.710620] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.569 [2024-10-01 13:52:42.710671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.569 [2024-10-01 13:52:42.710691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.569 [2024-10-01 13:52:42.710725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.569 [2024-10-01 13:52:42.710757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.569 [2024-10-01 13:52:42.710774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.569 [2024-10-01 13:52:42.710788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.569 [2024-10-01 13:52:42.710818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.569 [2024-10-01 13:52:42.714040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.569 [2024-10-01 13:52:42.714173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.569 [2024-10-01 13:52:42.714205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.569 [2024-10-01 13:52:42.714222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.569 [2024-10-01 13:52:42.714255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.569 [2024-10-01 13:52:42.714285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.569 [2024-10-01 13:52:42.714302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.569 [2024-10-01 13:52:42.714316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.569 [2024-10-01 13:52:42.714346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.569 [2024-10-01 13:52:42.721511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.569 [2024-10-01 13:52:42.721625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.569 [2024-10-01 13:52:42.721657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.569 [2024-10-01 13:52:42.721674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.569 [2024-10-01 13:52:42.721707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.569 [2024-10-01 13:52:42.721738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.569 [2024-10-01 13:52:42.721755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.569 [2024-10-01 13:52:42.721769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.569 [2024-10-01 13:52:42.721800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.569 [2024-10-01 13:52:42.724142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.569 [2024-10-01 13:52:42.724800] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.569 [2024-10-01 13:52:42.724843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.569 [2024-10-01 13:52:42.724863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.569 [2024-10-01 13:52:42.725052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.569 [2024-10-01 13:52:42.725193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.569 [2024-10-01 13:52:42.725222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.569 [2024-10-01 13:52:42.725236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.569 [2024-10-01 13:52:42.725275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.569 [2024-10-01 13:52:42.732190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.569 [2024-10-01 13:52:42.732314] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.569 [2024-10-01 13:52:42.732347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.569 [2024-10-01 13:52:42.732365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.570 [2024-10-01 13:52:42.732397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.570 [2024-10-01 13:52:42.732428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.570 [2024-10-01 13:52:42.732446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.570 [2024-10-01 13:52:42.732460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.570 [2024-10-01 13:52:42.732499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.570 [2024-10-01 13:52:42.736050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.570 [2024-10-01 13:52:42.736195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.570 [2024-10-01 13:52:42.736228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.570 [2024-10-01 13:52:42.736245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.570 [2024-10-01 13:52:42.736279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.570 [2024-10-01 13:52:42.736309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.570 [2024-10-01 13:52:42.736335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.570 [2024-10-01 13:52:42.736349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.570 [2024-10-01 13:52:42.736380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.570 [2024-10-01 13:52:42.743065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.570 [2024-10-01 13:52:42.743186] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.570 [2024-10-01 13:52:42.743218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.570 [2024-10-01 13:52:42.743236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.570 [2024-10-01 13:52:42.743268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.570 [2024-10-01 13:52:42.743308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.570 [2024-10-01 13:52:42.743328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.570 [2024-10-01 13:52:42.743342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.570 [2024-10-01 13:52:42.743391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.570 [2024-10-01 13:52:42.746811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.570 [2024-10-01 13:52:42.746949] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.570 [2024-10-01 13:52:42.746983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.570 [2024-10-01 13:52:42.747001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.570 [2024-10-01 13:52:42.747034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.570 [2024-10-01 13:52:42.747066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.570 [2024-10-01 13:52:42.747082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.570 [2024-10-01 13:52:42.747096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.570 [2024-10-01 13:52:42.747128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.570 [2024-10-01 13:52:42.753157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.570 [2024-10-01 13:52:42.753268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.570 [2024-10-01 13:52:42.753299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.570 [2024-10-01 13:52:42.753316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.570 [2024-10-01 13:52:42.753348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.570 [2024-10-01 13:52:42.753956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.570 [2024-10-01 13:52:42.753994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.570 [2024-10-01 13:52:42.754013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.570 [2024-10-01 13:52:42.754175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.570 [2024-10-01 13:52:42.757491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.570 [2024-10-01 13:52:42.757733] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.570 [2024-10-01 13:52:42.757776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.570 [2024-10-01 13:52:42.757797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.570 [2024-10-01 13:52:42.757906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.570 [2024-10-01 13:52:42.757968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.570 [2024-10-01 13:52:42.757988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.570 [2024-10-01 13:52:42.758003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.570 [2024-10-01 13:52:42.758045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.570 [2024-10-01 13:52:42.765209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.570 [2024-10-01 13:52:42.765369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.570 [2024-10-01 13:52:42.765411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.570 [2024-10-01 13:52:42.765450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.570 [2024-10-01 13:52:42.765486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.570 [2024-10-01 13:52:42.765518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.570 [2024-10-01 13:52:42.765535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.570 [2024-10-01 13:52:42.765550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.570 [2024-10-01 13:52:42.765581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.570 [2024-10-01 13:52:42.767581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.570 [2024-10-01 13:52:42.767691] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.570 [2024-10-01 13:52:42.767722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.570 [2024-10-01 13:52:42.767740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.570 [2024-10-01 13:52:42.767772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.570 [2024-10-01 13:52:42.767803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.570 [2024-10-01 13:52:42.767819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.570 [2024-10-01 13:52:42.767833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.570 [2024-10-01 13:52:42.767864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.570 [2024-10-01 13:52:42.775982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.570 [2024-10-01 13:52:42.776100] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.570 [2024-10-01 13:52:42.776132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.570 [2024-10-01 13:52:42.776151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.570 [2024-10-01 13:52:42.776184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.570 [2024-10-01 13:52:42.776216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.570 [2024-10-01 13:52:42.776234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.570 [2024-10-01 13:52:42.776248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.570 [2024-10-01 13:52:42.776278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.570 [2024-10-01 13:52:42.779629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.570 [2024-10-01 13:52:42.779990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.570 [2024-10-01 13:52:42.780034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.570 [2024-10-01 13:52:42.780055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.570 [2024-10-01 13:52:42.780125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.570 [2024-10-01 13:52:42.780164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.570 [2024-10-01 13:52:42.780207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.570 [2024-10-01 13:52:42.780223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.570 [2024-10-01 13:52:42.780256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.570 [2024-10-01 13:52:42.786157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.570 [2024-10-01 13:52:42.786851] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.570 [2024-10-01 13:52:42.786898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.570 [2024-10-01 13:52:42.786936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.570 [2024-10-01 13:52:42.787105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.570 [2024-10-01 13:52:42.787233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.570 [2024-10-01 13:52:42.787264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.570 [2024-10-01 13:52:42.787281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.571 [2024-10-01 13:52:42.787322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.571 [2024-10-01 13:52:42.790733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.571 [2024-10-01 13:52:42.790849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.571 [2024-10-01 13:52:42.790880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.571 [2024-10-01 13:52:42.790899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.571 [2024-10-01 13:52:42.790949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.571 [2024-10-01 13:52:42.790983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.571 [2024-10-01 13:52:42.791001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.571 [2024-10-01 13:52:42.791015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.571 [2024-10-01 13:52:42.791047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.571 [2024-10-01 13:52:42.796490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.571 [2024-10-01 13:52:42.796609] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.571 [2024-10-01 13:52:42.796641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.571 [2024-10-01 13:52:42.796659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.571 [2024-10-01 13:52:42.796691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.571 [2024-10-01 13:52:42.796722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.571 [2024-10-01 13:52:42.796739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.571 [2024-10-01 13:52:42.796753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.571 [2024-10-01 13:52:42.796784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.571 [2024-10-01 13:52:42.800962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.571 [2024-10-01 13:52:42.801689] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.571 [2024-10-01 13:52:42.801733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.571 [2024-10-01 13:52:42.801755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.571 [2024-10-01 13:52:42.801959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.571 [2024-10-01 13:52:42.802080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.571 [2024-10-01 13:52:42.802101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.571 [2024-10-01 13:52:42.802116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.571 [2024-10-01 13:52:42.802156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.571 [2024-10-01 13:52:42.806716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.571 [2024-10-01 13:52:42.806850] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.571 [2024-10-01 13:52:42.806882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.571 [2024-10-01 13:52:42.806900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.571 [2024-10-01 13:52:42.806951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.571 [2024-10-01 13:52:42.806985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.571 [2024-10-01 13:52:42.807003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.571 [2024-10-01 13:52:42.807018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.571 [2024-10-01 13:52:42.807049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.571 [2024-10-01 13:52:42.811459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.571 [2024-10-01 13:52:42.811587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.571 [2024-10-01 13:52:42.811625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.571 [2024-10-01 13:52:42.811643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.571 [2024-10-01 13:52:42.811678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.571 [2024-10-01 13:52:42.811710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.571 [2024-10-01 13:52:42.811727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.571 [2024-10-01 13:52:42.811742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.571 [2024-10-01 13:52:42.811773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.571 [2024-10-01 13:52:42.816820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.571 [2024-10-01 13:52:42.816955] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.571 [2024-10-01 13:52:42.816989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.571 [2024-10-01 13:52:42.817007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.571 [2024-10-01 13:52:42.817068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.571 [2024-10-01 13:52:42.817130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.571 [2024-10-01 13:52:42.817152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.571 [2024-10-01 13:52:42.817167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.571 [2024-10-01 13:52:42.817198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.571 [2024-10-01 13:52:42.821563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.571 [2024-10-01 13:52:42.821690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.571 [2024-10-01 13:52:42.821722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.571 [2024-10-01 13:52:42.821740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.571 [2024-10-01 13:52:42.821773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.571 [2024-10-01 13:52:42.821804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.571 [2024-10-01 13:52:42.821821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.571 [2024-10-01 13:52:42.821836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.571 [2024-10-01 13:52:42.821867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.571 [2024-10-01 13:52:42.826928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.571 [2024-10-01 13:52:42.827049] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.571 [2024-10-01 13:52:42.827080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.571 [2024-10-01 13:52:42.827098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.571 [2024-10-01 13:52:42.827684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.571 [2024-10-01 13:52:42.827867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.571 [2024-10-01 13:52:42.827903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.571 [2024-10-01 13:52:42.827936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.571 [2024-10-01 13:52:42.828046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.571 [2024-10-01 13:52:42.831655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.571 [2024-10-01 13:52:42.831769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.571 [2024-10-01 13:52:42.831801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.571 [2024-10-01 13:52:42.831820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.571 [2024-10-01 13:52:42.831853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.571 [2024-10-01 13:52:42.831885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.571 [2024-10-01 13:52:42.831902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.571 [2024-10-01 13:52:42.831958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.571 [2024-10-01 13:52:42.831994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.571 [2024-10-01 13:52:42.839161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.571 [2024-10-01 13:52:42.839321] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.571 [2024-10-01 13:52:42.839365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.571 [2024-10-01 13:52:42.839386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.571 [2024-10-01 13:52:42.839420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.571 [2024-10-01 13:52:42.839452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.571 [2024-10-01 13:52:42.839469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.571 [2024-10-01 13:52:42.839484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.571 [2024-10-01 13:52:42.839516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.571 [2024-10-01 13:52:42.841747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.571 [2024-10-01 13:52:42.841859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.572 [2024-10-01 13:52:42.841896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.572 [2024-10-01 13:52:42.841932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.572 [2024-10-01 13:52:42.842522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.572 [2024-10-01 13:52:42.842724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.572 [2024-10-01 13:52:42.842767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.572 [2024-10-01 13:52:42.842785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.572 [2024-10-01 13:52:42.842895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.572 [2024-10-01 13:52:42.850016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.572 [2024-10-01 13:52:42.850146] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.572 [2024-10-01 13:52:42.850178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.572 [2024-10-01 13:52:42.850197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.572 [2024-10-01 13:52:42.850230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.572 [2024-10-01 13:52:42.850261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.572 [2024-10-01 13:52:42.850279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.572 [2024-10-01 13:52:42.850293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.572 [2024-10-01 13:52:42.850323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.572 [2024-10-01 13:52:42.853962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.572 [2024-10-01 13:52:42.854113] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.572 [2024-10-01 13:52:42.854184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.572 [2024-10-01 13:52:42.854207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.572 [2024-10-01 13:52:42.854243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.572 [2024-10-01 13:52:42.854275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.572 [2024-10-01 13:52:42.854292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.572 [2024-10-01 13:52:42.854306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.572 [2024-10-01 13:52:42.854337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.572 [2024-10-01 13:52:42.860229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.572 [2024-10-01 13:52:42.860890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.572 [2024-10-01 13:52:42.860949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.572 [2024-10-01 13:52:42.860971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.572 [2024-10-01 13:52:42.861144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.572 [2024-10-01 13:52:42.861270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.572 [2024-10-01 13:52:42.861315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.572 [2024-10-01 13:52:42.861333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.572 [2024-10-01 13:52:42.861373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.572 [2024-10-01 13:52:42.864787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.572 [2024-10-01 13:52:42.864902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.572 [2024-10-01 13:52:42.864954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.572 [2024-10-01 13:52:42.864974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.572 [2024-10-01 13:52:42.865008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.572 [2024-10-01 13:52:42.865039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.572 [2024-10-01 13:52:42.865056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.572 [2024-10-01 13:52:42.865070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.572 [2024-10-01 13:52:42.865100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.572 8200.00 IOPS, 32.03 MiB/s [2024-10-01 13:52:42.871158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.572 [2024-10-01 13:52:42.871275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.572 [2024-10-01 13:52:42.871307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.572 [2024-10-01 13:52:42.871325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.572 [2024-10-01 13:52:42.871358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.572 [2024-10-01 13:52:42.871994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.572 [2024-10-01 13:52:42.872031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.572 [2024-10-01 13:52:42.872050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.572 [2024-10-01 13:52:42.872213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.572 [2024-10-01 13:52:42.874960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.572 [2024-10-01 13:52:42.875622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.572 [2024-10-01 13:52:42.875667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.572 [2024-10-01 13:52:42.875688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.572 [2024-10-01 13:52:42.875859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.572 [2024-10-01 13:52:42.875997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.572 [2024-10-01 13:52:42.876029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.572 [2024-10-01 13:52:42.876046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.572 [2024-10-01 13:52:42.876088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.572 [2024-10-01 13:52:42.883363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.572 [2024-10-01 13:52:42.883529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.572 [2024-10-01 13:52:42.883572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.572 [2024-10-01 13:52:42.883594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.572 [2024-10-01 13:52:42.883645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.572 [2024-10-01 13:52:42.883681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.572 [2024-10-01 13:52:42.883699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.572 [2024-10-01 13:52:42.883714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.572 [2024-10-01 13:52:42.883745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.572 [2024-10-01 13:52:42.885219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.572 [2024-10-01 13:52:42.885331] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.572 [2024-10-01 13:52:42.885361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.572 [2024-10-01 13:52:42.885379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.572 [2024-10-01 13:52:42.885411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.572 [2024-10-01 13:52:42.885443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.572 [2024-10-01 13:52:42.885470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.572 [2024-10-01 13:52:42.885484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.572 [2024-10-01 13:52:42.885543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.572 [2024-10-01 13:52:42.894167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.572 [2024-10-01 13:52:42.894307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.572 [2024-10-01 13:52:42.894340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.572 [2024-10-01 13:52:42.894359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.572 [2024-10-01 13:52:42.894393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.572 [2024-10-01 13:52:42.894424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.572 [2024-10-01 13:52:42.894442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.572 [2024-10-01 13:52:42.894457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.572 [2024-10-01 13:52:42.894487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.572 [2024-10-01 13:52:42.895304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.572 [2024-10-01 13:52:42.895414] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.572 [2024-10-01 13:52:42.895446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.572 [2024-10-01 13:52:42.895465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.572 [2024-10-01 13:52:42.895498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.573 [2024-10-01 13:52:42.895530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.573 [2024-10-01 13:52:42.895547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.573 [2024-10-01 13:52:42.895561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.573 [2024-10-01 13:52:42.895591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.573 [2024-10-01 13:52:42.904349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.573 [2024-10-01 13:52:42.904481] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.573 [2024-10-01 13:52:42.904514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.573 [2024-10-01 13:52:42.904532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.573 [2024-10-01 13:52:42.905130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.573 [2024-10-01 13:52:42.905319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.573 [2024-10-01 13:52:42.905356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.573 [2024-10-01 13:52:42.905375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.573 [2024-10-01 13:52:42.905489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.573 [2024-10-01 13:52:42.905570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.573 [2024-10-01 13:52:42.905678] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.573 [2024-10-01 13:52:42.905715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.573 [2024-10-01 13:52:42.905780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.573 [2024-10-01 13:52:42.905816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.573 [2024-10-01 13:52:42.907097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.573 [2024-10-01 13:52:42.907137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.573 [2024-10-01 13:52:42.907156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.573 [2024-10-01 13:52:42.907389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.573 [2024-10-01 13:52:42.914657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.573 [2024-10-01 13:52:42.914782] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.573 [2024-10-01 13:52:42.914825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.573 [2024-10-01 13:52:42.914844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.573 [2024-10-01 13:52:42.914878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.573 [2024-10-01 13:52:42.914924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.573 [2024-10-01 13:52:42.914946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.573 [2024-10-01 13:52:42.914961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.573 [2024-10-01 13:52:42.914993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.573 [2024-10-01 13:52:42.916332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.573 [2024-10-01 13:52:42.916462] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.573 [2024-10-01 13:52:42.916493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.573 [2024-10-01 13:52:42.916511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.573 [2024-10-01 13:52:42.916544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.573 [2024-10-01 13:52:42.916575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.573 [2024-10-01 13:52:42.916592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.573 [2024-10-01 13:52:42.916607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.573 [2024-10-01 13:52:42.916637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.573 [2024-10-01 13:52:42.924757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.573 [2024-10-01 13:52:42.924893] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.573 [2024-10-01 13:52:42.924940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.573 [2024-10-01 13:52:42.924960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.573 [2024-10-01 13:52:42.924999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.573 [2024-10-01 13:52:42.925031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.573 [2024-10-01 13:52:42.925072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.573 [2024-10-01 13:52:42.925088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.573 [2024-10-01 13:52:42.925120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.573 [2024-10-01 13:52:42.927552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.573 [2024-10-01 13:52:42.927735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.573 [2024-10-01 13:52:42.927778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.573 [2024-10-01 13:52:42.927798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.573 [2024-10-01 13:52:42.927833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.573 [2024-10-01 13:52:42.927865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.573 [2024-10-01 13:52:42.927882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.573 [2024-10-01 13:52:42.927896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.573 [2024-10-01 13:52:42.927943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.573 [2024-10-01 13:52:42.934861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.573 [2024-10-01 13:52:42.935013] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.573 [2024-10-01 13:52:42.935046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.573 [2024-10-01 13:52:42.935065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.573 [2024-10-01 13:52:42.935098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.573 [2024-10-01 13:52:42.935130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.573 [2024-10-01 13:52:42.935148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.573 [2024-10-01 13:52:42.935163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.573 [2024-10-01 13:52:42.936400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.573 [2024-10-01 13:52:42.938375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.573 [2024-10-01 13:52:42.938489] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.573 [2024-10-01 13:52:42.938527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.573 [2024-10-01 13:52:42.938558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.573 [2024-10-01 13:52:42.938593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.573 [2024-10-01 13:52:42.938624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.573 [2024-10-01 13:52:42.938642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.573 [2024-10-01 13:52:42.938656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.573 [2024-10-01 13:52:42.938687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.573 [2024-10-01 13:52:42.945676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.573 [2024-10-01 13:52:42.945809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.573 [2024-10-01 13:52:42.945843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.573 [2024-10-01 13:52:42.945862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.573 [2024-10-01 13:52:42.945896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.573 [2024-10-01 13:52:42.945948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.573 [2024-10-01 13:52:42.945969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.573 [2024-10-01 13:52:42.945984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.573 [2024-10-01 13:52:42.946014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.574 [2024-10-01 13:52:42.948465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.574 [2024-10-01 13:52:42.949149] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.574 [2024-10-01 13:52:42.949194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.574 [2024-10-01 13:52:42.949215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.574 [2024-10-01 13:52:42.949392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.574 [2024-10-01 13:52:42.949520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.574 [2024-10-01 13:52:42.949550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.574 [2024-10-01 13:52:42.949568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.574 [2024-10-01 13:52:42.949629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.574 [2024-10-01 13:52:42.956686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.574 [2024-10-01 13:52:42.957057] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.574 [2024-10-01 13:52:42.957102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.574 [2024-10-01 13:52:42.957124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.574 [2024-10-01 13:52:42.957196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.574 [2024-10-01 13:52:42.957235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.574 [2024-10-01 13:52:42.957254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.574 [2024-10-01 13:52:42.957269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.574 [2024-10-01 13:52:42.957301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.574 [2024-10-01 13:52:42.958816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.574 [2024-10-01 13:52:42.958944] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.574 [2024-10-01 13:52:42.958976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.574 [2024-10-01 13:52:42.958995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.574 [2024-10-01 13:52:42.959055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.574 [2024-10-01 13:52:42.959087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.574 [2024-10-01 13:52:42.959105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.574 [2024-10-01 13:52:42.959119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.574 [2024-10-01 13:52:42.959151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.574 [2024-10-01 13:52:42.967641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.574 [2024-10-01 13:52:42.967756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.574 [2024-10-01 13:52:42.967787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.574 [2024-10-01 13:52:42.967805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.574 [2024-10-01 13:52:42.967838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.574 [2024-10-01 13:52:42.967869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.574 [2024-10-01 13:52:42.967886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.574 [2024-10-01 13:52:42.967901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.574 [2024-10-01 13:52:42.967946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.574 [2024-10-01 13:52:42.968903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.574 [2024-10-01 13:52:42.969025] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.574 [2024-10-01 13:52:42.969062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.574 [2024-10-01 13:52:42.969081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.574 [2024-10-01 13:52:42.969114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.574 [2024-10-01 13:52:42.969145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.574 [2024-10-01 13:52:42.969162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.574 [2024-10-01 13:52:42.969176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.574 [2024-10-01 13:52:42.969206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.574 [2024-10-01 13:52:42.981374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.574 [2024-10-01 13:52:42.981501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.574 [2024-10-01 13:52:42.981802] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.574 [2024-10-01 13:52:42.981865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.574 [2024-10-01 13:52:42.981907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.574 [2024-10-01 13:52:42.982005] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.574 [2024-10-01 13:52:42.982036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.574 [2024-10-01 13:52:42.982092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.574 [2024-10-01 13:52:42.983765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.574 [2024-10-01 13:52:42.983822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.574 [2024-10-01 13:52:42.985690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.574 [2024-10-01 13:52:42.985742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.574 [2024-10-01 13:52:42.985767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:34.574 [2024-10-01 13:52:42.985796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.574 [2024-10-01 13:52:42.985815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.574 [2024-10-01 13:52:42.985833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.574 [2024-10-01 13:52:42.986883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.574 [2024-10-01 13:52:42.986955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.574 [2024-10-01 13:52:42.993885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.574 [2024-10-01 13:52:42.994032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.574 [2024-10-01 13:52:42.995494] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.574 [2024-10-01 13:52:42.995570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.574 [2024-10-01 13:52:42.995601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.574 [2024-10-01 13:52:42.995677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.574 [2024-10-01 13:52:42.995708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.574 [2024-10-01 13:52:42.995728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.574 [2024-10-01 13:52:42.996681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.574 [2024-10-01 13:52:42.996743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.574 [2024-10-01 13:52:42.998693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.574 [2024-10-01 13:52:42.998745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.574 [2024-10-01 13:52:42.998770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.574 [2024-10-01 13:52:42.998802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.574 [2024-10-01 13:52:42.998822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.574 [2024-10-01 13:52:42.998839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.574 [2024-10-01 13:52:42.999199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.574 [2024-10-01 13:52:42.999242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.574 [2024-10-01 13:52:43.007017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.574 [2024-10-01 13:52:43.007105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.574 [2024-10-01 13:52:43.007500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.574 [2024-10-01 13:52:43.007559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.574 [2024-10-01 13:52:43.007587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.574 [2024-10-01 13:52:43.007656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.574 [2024-10-01 13:52:43.007685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.574 [2024-10-01 13:52:43.007708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.574 [2024-10-01 13:52:43.008066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.574 [2024-10-01 13:52:43.008117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.574 [2024-10-01 13:52:43.010128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.574 [2024-10-01 13:52:43.010177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.575 [2024-10-01 13:52:43.010202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.575 [2024-10-01 13:52:43.010226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.575 [2024-10-01 13:52:43.010246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.575 [2024-10-01 13:52:43.010263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.575 [2024-10-01 13:52:43.011268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.575 [2024-10-01 13:52:43.011316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.575 [2024-10-01 13:52:43.020773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.575 [2024-10-01 13:52:43.020865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.575 [2024-10-01 13:52:43.022061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.575 [2024-10-01 13:52:43.022122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.575 [2024-10-01 13:52:43.022156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.575 [2024-10-01 13:52:43.022232] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.575 [2024-10-01 13:52:43.022263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.575 [2024-10-01 13:52:43.022284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.575 [2024-10-01 13:52:43.024434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.575 [2024-10-01 13:52:43.024495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.575 [2024-10-01 13:52:43.025702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.575 [2024-10-01 13:52:43.025752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.575 [2024-10-01 13:52:43.025778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.575 [2024-10-01 13:52:43.025808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.575 [2024-10-01 13:52:43.025883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.575 [2024-10-01 13:52:43.025904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.575 [2024-10-01 13:52:43.027907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.575 [2024-10-01 13:52:43.027983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.575 [2024-10-01 13:52:43.034055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.575 [2024-10-01 13:52:43.034138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.575 [2024-10-01 13:52:43.035449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.575 [2024-10-01 13:52:43.035511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.575 [2024-10-01 13:52:43.035539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.575 [2024-10-01 13:52:43.035607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.575 [2024-10-01 13:52:43.035637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.575 [2024-10-01 13:52:43.035662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.575 [2024-10-01 13:52:43.036588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.575 [2024-10-01 13:52:43.036643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.575 [2024-10-01 13:52:43.036865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.575 [2024-10-01 13:52:43.036942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.575 [2024-10-01 13:52:43.036969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.575 [2024-10-01 13:52:43.036995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.575 [2024-10-01 13:52:43.037015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.575 [2024-10-01 13:52:43.037034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.575 [2024-10-01 13:52:43.037182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.575 [2024-10-01 13:52:43.037225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.575 [2024-10-01 13:52:43.045163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.575 [2024-10-01 13:52:43.045236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.575 [2024-10-01 13:52:43.045393] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.575 [2024-10-01 13:52:43.045434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.575 [2024-10-01 13:52:43.045458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.575 [2024-10-01 13:52:43.045523] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.575 [2024-10-01 13:52:43.045552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.575 [2024-10-01 13:52:43.045581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.575 [2024-10-01 13:52:43.046846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.575 [2024-10-01 13:52:43.046901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.575 [2024-10-01 13:52:43.047219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.575 [2024-10-01 13:52:43.047266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.575 [2024-10-01 13:52:43.047290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.575 [2024-10-01 13:52:43.047315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.575 [2024-10-01 13:52:43.047335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.575 [2024-10-01 13:52:43.047352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.575 [2024-10-01 13:52:43.048933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.575 [2024-10-01 13:52:43.048975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.575 [2024-10-01 13:52:43.055341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.575 [2024-10-01 13:52:43.055444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.575 [2024-10-01 13:52:43.055590] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.575 [2024-10-01 13:52:43.055633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.575 [2024-10-01 13:52:43.055669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.575 [2024-10-01 13:52:43.057378] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.575 [2024-10-01 13:52:43.057431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.575 [2024-10-01 13:52:43.057457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.575 [2024-10-01 13:52:43.057484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.575 [2024-10-01 13:52:43.057814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.575 [2024-10-01 13:52:43.057862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.575 [2024-10-01 13:52:43.057885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.575 [2024-10-01 13:52:43.057905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.575 [2024-10-01 13:52:43.059184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.575 [2024-10-01 13:52:43.059237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.575 [2024-10-01 13:52:43.059262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.575 [2024-10-01 13:52:43.059281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.575 [2024-10-01 13:52:43.060557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.575 [2024-10-01 13:52:43.066383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.575 [2024-10-01 13:52:43.066458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.575 [2024-10-01 13:52:43.066626] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.575 [2024-10-01 13:52:43.066722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.575 [2024-10-01 13:52:43.066750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.575 [2024-10-01 13:52:43.066818] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.575 [2024-10-01 13:52:43.066854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.575 [2024-10-01 13:52:43.066875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.575 [2024-10-01 13:52:43.068695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.575 [2024-10-01 13:52:43.068749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.575 [2024-10-01 13:52:43.069894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.575 [2024-10-01 13:52:43.069969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.575 [2024-10-01 13:52:43.069997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.575 [2024-10-01 13:52:43.070021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.575 [2024-10-01 13:52:43.070040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.575 [2024-10-01 13:52:43.070058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.575 [2024-10-01 13:52:43.072027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.575 [2024-10-01 13:52:43.072073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.575 [2024-10-01 13:52:43.076574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.575 [2024-10-01 13:52:43.076669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.575 [2024-10-01 13:52:43.076797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.575 [2024-10-01 13:52:43.076837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.575 [2024-10-01 13:52:43.076862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.575 [2024-10-01 13:52:43.078488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.576 [2024-10-01 13:52:43.078553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.576 [2024-10-01 13:52:43.078583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.576 [2024-10-01 13:52:43.078611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.576 [2024-10-01 13:52:43.079714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.576 [2024-10-01 13:52:43.079765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.576 [2024-10-01 13:52:43.079788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.576 [2024-10-01 13:52:43.079808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.576 [2024-10-01 13:52:43.079983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.576 [2024-10-01 13:52:43.080017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.576 [2024-10-01 13:52:43.080085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.576 [2024-10-01 13:52:43.080120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.576 [2024-10-01 13:52:43.080875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.576 [2024-10-01 13:52:43.088246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.576 [2024-10-01 13:52:43.088330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.576 [2024-10-01 13:52:43.089675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.576 [2024-10-01 13:52:43.089731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.576 [2024-10-01 13:52:43.089758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.576 [2024-10-01 13:52:43.089843] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.576 [2024-10-01 13:52:43.089875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.576 [2024-10-01 13:52:43.089897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.576 [2024-10-01 13:52:43.090202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.576 [2024-10-01 13:52:43.090244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.576 [2024-10-01 13:52:43.091466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.576 [2024-10-01 13:52:43.091515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.576 [2024-10-01 13:52:43.091540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.576 [2024-10-01 13:52:43.091564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.576 [2024-10-01 13:52:43.091583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.576 [2024-10-01 13:52:43.091601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.576 [2024-10-01 13:52:43.091884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.576 [2024-10-01 13:52:43.091944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.576 [2024-10-01 13:52:43.099625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.576 [2024-10-01 13:52:43.099688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.576 [2024-10-01 13:52:43.099832] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.576 [2024-10-01 13:52:43.099872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.576 [2024-10-01 13:52:43.099895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.576 [2024-10-01 13:52:43.099990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.576 [2024-10-01 13:52:43.100036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.576 [2024-10-01 13:52:43.100058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.576 [2024-10-01 13:52:43.101667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.576 [2024-10-01 13:52:43.101767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.576 [2024-10-01 13:52:43.102147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.576 [2024-10-01 13:52:43.102194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.576 [2024-10-01 13:52:43.102217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.576 [2024-10-01 13:52:43.102241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.576 [2024-10-01 13:52:43.102272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.576 [2024-10-01 13:52:43.102293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.576 [2024-10-01 13:52:43.103544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.576 [2024-10-01 13:52:43.103591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.576 [2024-10-01 13:52:43.110858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.576 [2024-10-01 13:52:43.110953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.576 [2024-10-01 13:52:43.111134] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.576 [2024-10-01 13:52:43.111185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.576 [2024-10-01 13:52:43.111210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.576 [2024-10-01 13:52:43.111275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.576 [2024-10-01 13:52:43.111307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.576 [2024-10-01 13:52:43.111328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.576 [2024-10-01 13:52:43.113154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.576 [2024-10-01 13:52:43.113208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.576 [2024-10-01 13:52:43.114406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.576 [2024-10-01 13:52:43.114454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.576 [2024-10-01 13:52:43.114479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.576 [2024-10-01 13:52:43.114506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.576 [2024-10-01 13:52:43.114525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.576 [2024-10-01 13:52:43.114565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.576 [2024-10-01 13:52:43.116502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.576 [2024-10-01 13:52:43.116550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.576 [2024-10-01 13:52:43.121118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.576 [2024-10-01 13:52:43.121192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.576 [2024-10-01 13:52:43.121362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.576 [2024-10-01 13:52:43.121405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.576 [2024-10-01 13:52:43.121485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.576 [2024-10-01 13:52:43.121560] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.576 [2024-10-01 13:52:43.121591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.576 [2024-10-01 13:52:43.121612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.576 [2024-10-01 13:52:43.123219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.576 [2024-10-01 13:52:43.123274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.576 [2024-10-01 13:52:43.124423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.576 [2024-10-01 13:52:43.124472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.576 [2024-10-01 13:52:43.124497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.576 [2024-10-01 13:52:43.124522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.576 [2024-10-01 13:52:43.124541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.576 [2024-10-01 13:52:43.124558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.576 [2024-10-01 13:52:43.125432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.576 [2024-10-01 13:52:43.125477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.576 [2024-10-01 13:52:43.133885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.576 [2024-10-01 13:52:43.133972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.576 [2024-10-01 13:52:43.134360] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.576 [2024-10-01 13:52:43.134417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.576 [2024-10-01 13:52:43.134444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.576 [2024-10-01 13:52:43.134519] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.576 [2024-10-01 13:52:43.134570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.576 [2024-10-01 13:52:43.134593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.576 [2024-10-01 13:52:43.135772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.576 [2024-10-01 13:52:43.135835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.576 [2024-10-01 13:52:43.136153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.576 [2024-10-01 13:52:43.136198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.576 [2024-10-01 13:52:43.136232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.576 [2024-10-01 13:52:43.136257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.576 [2024-10-01 13:52:43.136277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.576 [2024-10-01 13:52:43.136294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.576 [2024-10-01 13:52:43.137869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.577 [2024-10-01 13:52:43.137929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.577 [2024-10-01 13:52:43.144091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.577 [2024-10-01 13:52:43.144156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.577 [2024-10-01 13:52:43.144309] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.577 [2024-10-01 13:52:43.144349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.577 [2024-10-01 13:52:43.144372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.577 [2024-10-01 13:52:43.144433] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.577 [2024-10-01 13:52:43.144463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.577 [2024-10-01 13:52:43.144483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.577 [2024-10-01 13:52:43.146099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.577 [2024-10-01 13:52:43.146154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.577 [2024-10-01 13:52:43.146457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.577 [2024-10-01 13:52:43.146506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.577 [2024-10-01 13:52:43.146530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.577 [2024-10-01 13:52:43.146578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.577 [2024-10-01 13:52:43.146601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.577 [2024-10-01 13:52:43.146618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.577 [2024-10-01 13:52:43.147818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.577 [2024-10-01 13:52:43.147864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.577 [2024-10-01 13:52:43.154274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.577 [2024-10-01 13:52:43.155124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.577 [2024-10-01 13:52:43.155259] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.577 [2024-10-01 13:52:43.155305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.577 [2024-10-01 13:52:43.155333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.577 [2024-10-01 13:52:43.155596] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.577 [2024-10-01 13:52:43.155646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.577 [2024-10-01 13:52:43.155670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.577 [2024-10-01 13:52:43.155695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.577 [2024-10-01 13:52:43.155869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.577 [2024-10-01 13:52:43.155977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.577 [2024-10-01 13:52:43.156003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.577 [2024-10-01 13:52:43.156021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.577 [2024-10-01 13:52:43.157798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.577 [2024-10-01 13:52:43.157846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.577 [2024-10-01 13:52:43.157869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.577 [2024-10-01 13:52:43.157888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.577 [2024-10-01 13:52:43.159106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.577 [2024-10-01 13:52:43.165692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.577 [2024-10-01 13:52:43.165757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.577 [2024-10-01 13:52:43.165889] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.577 [2024-10-01 13:52:43.165947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.577 [2024-10-01 13:52:43.165972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.577 [2024-10-01 13:52:43.166039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.577 [2024-10-01 13:52:43.166069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.577 [2024-10-01 13:52:43.166094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.577 [2024-10-01 13:52:43.166137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.577 [2024-10-01 13:52:43.166166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.577 [2024-10-01 13:52:43.167722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.577 [2024-10-01 13:52:43.167773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.577 [2024-10-01 13:52:43.167798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.577 [2024-10-01 13:52:43.167821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.577 [2024-10-01 13:52:43.167840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.577 [2024-10-01 13:52:43.167857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.577 [2024-10-01 13:52:43.169019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.577 [2024-10-01 13:52:43.169069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.577 [2024-10-01 13:52:43.177657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.577 [2024-10-01 13:52:43.177728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.577 [2024-10-01 13:52:43.179173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.577 [2024-10-01 13:52:43.179233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.577 [2024-10-01 13:52:43.179260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.577 [2024-10-01 13:52:43.179376] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.577 [2024-10-01 13:52:43.179411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.577 [2024-10-01 13:52:43.179438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.577 [2024-10-01 13:52:43.179689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.577 [2024-10-01 13:52:43.179730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.577 [2024-10-01 13:52:43.180950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.577 [2024-10-01 13:52:43.180999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.577 [2024-10-01 13:52:43.181025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.577 [2024-10-01 13:52:43.181048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.577 [2024-10-01 13:52:43.181067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.577 [2024-10-01 13:52:43.181084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.577 [2024-10-01 13:52:43.181359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.577 [2024-10-01 13:52:43.181392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.577 [2024-10-01 13:52:43.189259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.577 [2024-10-01 13:52:43.189330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.577 [2024-10-01 13:52:43.189477] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.577 [2024-10-01 13:52:43.189516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.577 [2024-10-01 13:52:43.189539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.577 [2024-10-01 13:52:43.189600] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.577 [2024-10-01 13:52:43.189629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.577 [2024-10-01 13:52:43.189658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.577 [2024-10-01 13:52:43.191282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.577 [2024-10-01 13:52:43.191343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.577 [2024-10-01 13:52:43.191646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.577 [2024-10-01 13:52:43.191696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.577 [2024-10-01 13:52:43.191720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.577 [2024-10-01 13:52:43.191744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.577 [2024-10-01 13:52:43.191774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.577 [2024-10-01 13:52:43.191792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.577 [2024-10-01 13:52:43.193024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.577 [2024-10-01 13:52:43.193115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.578 [2024-10-01 13:52:43.200349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.578 [2024-10-01 13:52:43.200416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.578 [2024-10-01 13:52:43.200663] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.578 [2024-10-01 13:52:43.200723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.578 [2024-10-01 13:52:43.200749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.578 [2024-10-01 13:52:43.200814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.578 [2024-10-01 13:52:43.200844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.578 [2024-10-01 13:52:43.200864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.578 [2024-10-01 13:52:43.202689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.578 [2024-10-01 13:52:43.202751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.578 [2024-10-01 13:52:43.203977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.578 [2024-10-01 13:52:43.204027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.578 [2024-10-01 13:52:43.204060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.578 [2024-10-01 13:52:43.204085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.578 [2024-10-01 13:52:43.204105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.578 [2024-10-01 13:52:43.204122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.578 [2024-10-01 13:52:43.204360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.578 [2024-10-01 13:52:43.204396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.578 [2024-10-01 13:52:43.210635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.578 [2024-10-01 13:52:43.210734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.578 [2024-10-01 13:52:43.210879] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.578 [2024-10-01 13:52:43.210936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.578 [2024-10-01 13:52:43.210963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.578 [2024-10-01 13:52:43.211037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.578 [2024-10-01 13:52:43.211070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.578 [2024-10-01 13:52:43.211095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.578 [2024-10-01 13:52:43.212640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.578 [2024-10-01 13:52:43.212709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.578 [2024-10-01 13:52:43.213857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.578 [2024-10-01 13:52:43.213971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.578 [2024-10-01 13:52:43.213998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.578 [2024-10-01 13:52:43.214023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.578 [2024-10-01 13:52:43.214042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.578 [2024-10-01 13:52:43.214059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.578 [2024-10-01 13:52:43.214946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.578 [2024-10-01 13:52:43.214993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.578 [2024-10-01 13:52:43.222354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.578 [2024-10-01 13:52:43.222445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.578 [2024-10-01 13:52:43.223900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.578 [2024-10-01 13:52:43.223977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.578 [2024-10-01 13:52:43.224006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.578 [2024-10-01 13:52:43.224084] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.578 [2024-10-01 13:52:43.224117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.578 [2024-10-01 13:52:43.224138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.578 [2024-10-01 13:52:43.224395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.578 [2024-10-01 13:52:43.224449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.578 [2024-10-01 13:52:43.225646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.578 [2024-10-01 13:52:43.225695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.578 [2024-10-01 13:52:43.225720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.578 [2024-10-01 13:52:43.225744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.578 [2024-10-01 13:52:43.225763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.578 [2024-10-01 13:52:43.225780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.578 [2024-10-01 13:52:43.227618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.578 [2024-10-01 13:52:43.227668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.578 [2024-10-01 13:52:43.233688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.578 [2024-10-01 13:52:43.233766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.578 [2024-10-01 13:52:43.233930] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.578 [2024-10-01 13:52:43.233971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.578 [2024-10-01 13:52:43.233994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.578 [2024-10-01 13:52:43.234067] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.578 [2024-10-01 13:52:43.234130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.578 [2024-10-01 13:52:43.234153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.578 [2024-10-01 13:52:43.235778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.578 [2024-10-01 13:52:43.235846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.578 [2024-10-01 13:52:43.236194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.578 [2024-10-01 13:52:43.236238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.578 [2024-10-01 13:52:43.236261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.578 [2024-10-01 13:52:43.236284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.578 [2024-10-01 13:52:43.236303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.578 [2024-10-01 13:52:43.236320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.578 [2024-10-01 13:52:43.237537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.578 [2024-10-01 13:52:43.237586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.578 [2024-10-01 13:52:43.244849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.578 [2024-10-01 13:52:43.245109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.578 [2024-10-01 13:52:43.245260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.578 [2024-10-01 13:52:43.245320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.578 [2024-10-01 13:52:43.245347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.578 [2024-10-01 13:52:43.247313] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.578 [2024-10-01 13:52:43.247380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.578 [2024-10-01 13:52:43.247408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.578 [2024-10-01 13:52:43.247442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.578 [2024-10-01 13:52:43.248620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.578 [2024-10-01 13:52:43.248673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.578 [2024-10-01 13:52:43.248697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.578 [2024-10-01 13:52:43.248718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.578 [2024-10-01 13:52:43.249022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.578 [2024-10-01 13:52:43.249070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.578 [2024-10-01 13:52:43.249092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.578 [2024-10-01 13:52:43.249111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.578 [2024-10-01 13:52:43.249273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.578 [2024-10-01 13:52:43.257005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.578 [2024-10-01 13:52:43.258307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.578 [2024-10-01 13:52:43.258513] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.578 [2024-10-01 13:52:43.258588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.578 [2024-10-01 13:52:43.258617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.578 [2024-10-01 13:52:43.259652] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.578 [2024-10-01 13:52:43.259709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.578 [2024-10-01 13:52:43.259735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.578 [2024-10-01 13:52:43.259763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.578 [2024-10-01 13:52:43.260032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.578 [2024-10-01 13:52:43.260070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.578 [2024-10-01 13:52:43.260090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.578 [2024-10-01 13:52:43.260112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.578 [2024-10-01 13:52:43.260267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.578 [2024-10-01 13:52:43.260297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.578 [2024-10-01 13:52:43.260319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.579 [2024-10-01 13:52:43.260336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.579 [2024-10-01 13:52:43.262167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.579 [2024-10-01 13:52:43.269538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.579 [2024-10-01 13:52:43.270024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.579 [2024-10-01 13:52:43.270085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.579 [2024-10-01 13:52:43.270114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.579 [2024-10-01 13:52:43.270203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.579 [2024-10-01 13:52:43.270257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.579 [2024-10-01 13:52:43.271900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.579 [2024-10-01 13:52:43.271983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.579 [2024-10-01 13:52:43.272009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.579 [2024-10-01 13:52:43.272031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.579 [2024-10-01 13:52:43.272049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.579 [2024-10-01 13:52:43.272070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.579 [2024-10-01 13:52:43.273218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.579 [2024-10-01 13:52:43.273283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.579 [2024-10-01 13:52:43.274216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.579 [2024-10-01 13:52:43.274267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.579 [2024-10-01 13:52:43.274291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.579 [2024-10-01 13:52:43.274554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.579 [2024-10-01 13:52:43.281464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.579 [2024-10-01 13:52:43.282798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.579 [2024-10-01 13:52:43.282975] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.579 [2024-10-01 13:52:43.283025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.579 [2024-10-01 13:52:43.283049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.579 [2024-10-01 13:52:43.283394] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.579 [2024-10-01 13:52:43.283459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.579 [2024-10-01 13:52:43.283487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.579 [2024-10-01 13:52:43.283513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.579 [2024-10-01 13:52:43.284706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.579 [2024-10-01 13:52:43.284764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.579 [2024-10-01 13:52:43.284788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.579 [2024-10-01 13:52:43.284808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.579 [2024-10-01 13:52:43.285085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.579 [2024-10-01 13:52:43.285129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.579 [2024-10-01 13:52:43.285150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.579 [2024-10-01 13:52:43.285169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.579 [2024-10-01 13:52:43.286724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.579 [2024-10-01 13:52:43.292810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.579 [2024-10-01 13:52:43.293018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.579 [2024-10-01 13:52:43.293069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.579 [2024-10-01 13:52:43.293093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.579 [2024-10-01 13:52:43.293155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.579 [2024-10-01 13:52:43.293219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.579 [2024-10-01 13:52:43.293261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.579 [2024-10-01 13:52:43.293327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.579 [2024-10-01 13:52:43.293348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.579 [2024-10-01 13:52:43.295029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.579 [2024-10-01 13:52:43.295142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.579 [2024-10-01 13:52:43.295179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.579 [2024-10-01 13:52:43.295202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.579 [2024-10-01 13:52:43.295506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.579 [2024-10-01 13:52:43.296698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.579 [2024-10-01 13:52:43.296748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.579 [2024-10-01 13:52:43.296775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.579 [2024-10-01 13:52:43.298080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.579 [2024-10-01 13:52:43.302954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.579 [2024-10-01 13:52:43.303108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.579 [2024-10-01 13:52:43.303149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.579 [2024-10-01 13:52:43.303171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.579 [2024-10-01 13:52:43.303941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.579 [2024-10-01 13:52:43.304209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.579 [2024-10-01 13:52:43.304254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.579 [2024-10-01 13:52:43.304277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.579 [2024-10-01 13:52:43.304436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.579 [2024-10-01 13:52:43.304470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.579 [2024-10-01 13:52:43.304579] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.579 [2024-10-01 13:52:43.304616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.579 [2024-10-01 13:52:43.304637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.579 [2024-10-01 13:52:43.306419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.579 [2024-10-01 13:52:43.307600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.579 [2024-10-01 13:52:43.307651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.579 [2024-10-01 13:52:43.307680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.579 [2024-10-01 13:52:43.307932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.579 [2024-10-01 13:52:43.314275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.579 [2024-10-01 13:52:43.314459] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.579 [2024-10-01 13:52:43.314500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.579 [2024-10-01 13:52:43.314522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.579 [2024-10-01 13:52:43.314593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.579 [2024-10-01 13:52:43.316159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.579 [2024-10-01 13:52:43.316212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.579 [2024-10-01 13:52:43.316238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.579 [2024-10-01 13:52:43.317395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.579 [2024-10-01 13:52:43.317451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.579 [2024-10-01 13:52:43.317668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.579 [2024-10-01 13:52:43.317708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.579 [2024-10-01 13:52:43.317731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.579 [2024-10-01 13:52:43.318505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.579 [2024-10-01 13:52:43.318773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.579 [2024-10-01 13:52:43.318819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.579 [2024-10-01 13:52:43.318841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.579 [2024-10-01 13:52:43.319005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.579 [2024-10-01 13:52:43.326923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.579 [2024-10-01 13:52:43.327380] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.579 [2024-10-01 13:52:43.327441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.579 [2024-10-01 13:52:43.327469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.579 [2024-10-01 13:52:43.328700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.579 [2024-10-01 13:52:43.330585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.579 [2024-10-01 13:52:43.330637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.579 [2024-10-01 13:52:43.330663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.579 [2024-10-01 13:52:43.331812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.579 [2024-10-01 13:52:43.331877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.579 [2024-10-01 13:52:43.332902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.579 [2024-10-01 13:52:43.332977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.579 [2024-10-01 13:52:43.333004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.579 [2024-10-01 13:52:43.333246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.579 [2024-10-01 13:52:43.333466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.579 [2024-10-01 13:52:43.333509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.579 [2024-10-01 13:52:43.333531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.579 [2024-10-01 13:52:43.335357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.580 [2024-10-01 13:52:43.341149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.580 [2024-10-01 13:52:43.342738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.580 [2024-10-01 13:52:43.342799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.580 [2024-10-01 13:52:43.342827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.580 [2024-10-01 13:52:43.343160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.580 [2024-10-01 13:52:43.344738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.580 [2024-10-01 13:52:43.344809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.580 [2024-10-01 13:52:43.344837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.580 [2024-10-01 13:52:43.344858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.580 [2024-10-01 13:52:43.346011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.580 [2024-10-01 13:52:43.346125] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.580 [2024-10-01 13:52:43.346163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.580 [2024-10-01 13:52:43.346185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.580 [2024-10-01 13:52:43.347089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.580 [2024-10-01 13:52:43.347334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.580 [2024-10-01 13:52:43.347370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.580 [2024-10-01 13:52:43.347391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.580 [2024-10-01 13:52:43.347532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.580 [2024-10-01 13:52:43.353182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.580 [2024-10-01 13:52:43.354092] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.580 [2024-10-01 13:52:43.354143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.580 [2024-10-01 13:52:43.354166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.580 [2024-10-01 13:52:43.354551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.580 [2024-10-01 13:52:43.354722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.580 [2024-10-01 13:52:43.354755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.580 [2024-10-01 13:52:43.354774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.580 [2024-10-01 13:52:43.354858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.580 [2024-10-01 13:52:43.354945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.580 [2024-10-01 13:52:43.355043] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.580 [2024-10-01 13:52:43.355078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.580 [2024-10-01 13:52:43.355106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.580 [2024-10-01 13:52:43.355141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.580 [2024-10-01 13:52:43.355172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.580 [2024-10-01 13:52:43.355190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.580 [2024-10-01 13:52:43.355204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.580 [2024-10-01 13:52:43.355235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.580 [2024-10-01 13:52:43.364098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.580 [2024-10-01 13:52:43.364368] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.580 [2024-10-01 13:52:43.364416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.580 [2024-10-01 13:52:43.364439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.580 [2024-10-01 13:52:43.364489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.580 [2024-10-01 13:52:43.364531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.580 [2024-10-01 13:52:43.364550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.580 [2024-10-01 13:52:43.364567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.580 [2024-10-01 13:52:43.364600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.580 [2024-10-01 13:52:43.367386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.580 [2024-10-01 13:52:43.368290] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.580 [2024-10-01 13:52:43.368339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.580 [2024-10-01 13:52:43.368361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.580 [2024-10-01 13:52:43.368725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.580 [2024-10-01 13:52:43.368907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.580 [2024-10-01 13:52:43.368957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.580 [2024-10-01 13:52:43.368976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.580 [2024-10-01 13:52:43.369021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.580 [2024-10-01 13:52:43.374227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.580 [2024-10-01 13:52:43.374362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.580 [2024-10-01 13:52:43.374398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.580 [2024-10-01 13:52:43.374459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.580 [2024-10-01 13:52:43.375094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.580 [2024-10-01 13:52:43.375304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.580 [2024-10-01 13:52:43.375332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.580 [2024-10-01 13:52:43.375359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.580 [2024-10-01 13:52:43.375476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.580 [2024-10-01 13:52:43.378159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.580 [2024-10-01 13:52:43.378416] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.580 [2024-10-01 13:52:43.378467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.580 [2024-10-01 13:52:43.378490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.580 [2024-10-01 13:52:43.378627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.580 [2024-10-01 13:52:43.378677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.580 [2024-10-01 13:52:43.378696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.580 [2024-10-01 13:52:43.378713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.580 [2024-10-01 13:52:43.378752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.580 [2024-10-01 13:52:43.384895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.580 [2024-10-01 13:52:43.385062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.580 [2024-10-01 13:52:43.385097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.580 [2024-10-01 13:52:43.385116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.580 [2024-10-01 13:52:43.385152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.580 [2024-10-01 13:52:43.385184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.580 [2024-10-01 13:52:43.385208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.580 [2024-10-01 13:52:43.385229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.580 [2024-10-01 13:52:43.385271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.580 [2024-10-01 13:52:43.388276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.580 [2024-10-01 13:52:43.388407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.580 [2024-10-01 13:52:43.388441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.580 [2024-10-01 13:52:43.388460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.580 [2024-10-01 13:52:43.388495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.580 [2024-10-01 13:52:43.389118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.580 [2024-10-01 13:52:43.389189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.580 [2024-10-01 13:52:43.389210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.580 [2024-10-01 13:52:43.389413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.580 [2024-10-01 13:52:43.397021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.580 [2024-10-01 13:52:43.397298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.580 [2024-10-01 13:52:43.397359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.580 [2024-10-01 13:52:43.397382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.580 [2024-10-01 13:52:43.397429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.580 [2024-10-01 13:52:43.397465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.580 [2024-10-01 13:52:43.397483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.580 [2024-10-01 13:52:43.397499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.580 [2024-10-01 13:52:43.397533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.580 [2024-10-01 13:52:43.399209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.580 [2024-10-01 13:52:43.399335] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.580 [2024-10-01 13:52:43.399371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.580 [2024-10-01 13:52:43.399398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.580 [2024-10-01 13:52:43.399434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.580 [2024-10-01 13:52:43.399466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.580 [2024-10-01 13:52:43.399483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.580 [2024-10-01 13:52:43.399499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.580 [2024-10-01 13:52:43.399530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.580 [2024-10-01 13:52:43.407145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.581 [2024-10-01 13:52:43.407293] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.581 [2024-10-01 13:52:43.407328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.581 [2024-10-01 13:52:43.407347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.581 [2024-10-01 13:52:43.407381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.581 [2024-10-01 13:52:43.407414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.581 [2024-10-01 13:52:43.407431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.581 [2024-10-01 13:52:43.407446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.581 [2024-10-01 13:52:43.407485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.581 [2024-10-01 13:52:43.411075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.581 [2024-10-01 13:52:43.411511] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.581 [2024-10-01 13:52:43.411558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.581 [2024-10-01 13:52:43.411581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.581 [2024-10-01 13:52:43.411734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.581 [2024-10-01 13:52:43.411783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.581 [2024-10-01 13:52:43.411804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.581 [2024-10-01 13:52:43.411819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.581 [2024-10-01 13:52:43.411852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.581 [2024-10-01 13:52:43.418013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.581 [2024-10-01 13:52:43.418159] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.581 [2024-10-01 13:52:43.418194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.581 [2024-10-01 13:52:43.418213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.581 [2024-10-01 13:52:43.418247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.581 [2024-10-01 13:52:43.418279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.581 [2024-10-01 13:52:43.418297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.581 [2024-10-01 13:52:43.418312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.581 [2024-10-01 13:52:43.418344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.581 [2024-10-01 13:52:43.421230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.581 [2024-10-01 13:52:43.421349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.581 [2024-10-01 13:52:43.421390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.581 [2024-10-01 13:52:43.421413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.581 [2024-10-01 13:52:43.421448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.581 [2024-10-01 13:52:43.421479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.581 [2024-10-01 13:52:43.421497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.581 [2024-10-01 13:52:43.421512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.581 [2024-10-01 13:52:43.421543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.581 [2024-10-01 13:52:43.428121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.581 [2024-10-01 13:52:43.428254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.581 [2024-10-01 13:52:43.428287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.581 [2024-10-01 13:52:43.428306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.581 [2024-10-01 13:52:43.429555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.581 [2024-10-01 13:52:43.430488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.581 [2024-10-01 13:52:43.430552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.581 [2024-10-01 13:52:43.430578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.581 [2024-10-01 13:52:43.430696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.581 [2024-10-01 13:52:43.431325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.581 [2024-10-01 13:52:43.432033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.581 [2024-10-01 13:52:43.432080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.581 [2024-10-01 13:52:43.432102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.581 [2024-10-01 13:52:43.432281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.581 [2024-10-01 13:52:43.432403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.581 [2024-10-01 13:52:43.432426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.581 [2024-10-01 13:52:43.432441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.581 [2024-10-01 13:52:43.432482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.581 [2024-10-01 13:52:43.439485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.581 [2024-10-01 13:52:43.439856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.581 [2024-10-01 13:52:43.439903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.581 [2024-10-01 13:52:43.439941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.581 [2024-10-01 13:52:43.440109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.581 [2024-10-01 13:52:43.440159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.581 [2024-10-01 13:52:43.440179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.581 [2024-10-01 13:52:43.440194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.581 [2024-10-01 13:52:43.440226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.581 [2024-10-01 13:52:43.441950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.581 [2024-10-01 13:52:43.442075] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.581 [2024-10-01 13:52:43.442109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.581 [2024-10-01 13:52:43.442128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.581 [2024-10-01 13:52:43.442162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.581 [2024-10-01 13:52:43.442194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.581 [2024-10-01 13:52:43.442212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.581 [2024-10-01 13:52:43.442265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.581 [2024-10-01 13:52:43.442300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.581 [2024-10-01 13:52:43.449681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.581 [2024-10-01 13:52:43.449869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.581 [2024-10-01 13:52:43.449905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.581 [2024-10-01 13:52:43.449943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.581 [2024-10-01 13:52:43.449982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.581 [2024-10-01 13:52:43.450015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.581 [2024-10-01 13:52:43.450033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.581 [2024-10-01 13:52:43.450049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.581 [2024-10-01 13:52:43.450081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.581 [2024-10-01 13:52:43.454048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.581 [2024-10-01 13:52:43.454348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.581 [2024-10-01 13:52:43.454398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.581 [2024-10-01 13:52:43.454420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.581 [2024-10-01 13:52:43.454466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.581 [2024-10-01 13:52:43.454517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.581 [2024-10-01 13:52:43.454552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.581 [2024-10-01 13:52:43.454573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.581 [2024-10-01 13:52:43.454608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.581 [2024-10-01 13:52:43.460568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.581 [2024-10-01 13:52:43.460825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.581 [2024-10-01 13:52:43.460862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.581 [2024-10-01 13:52:43.460891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.581 [2024-10-01 13:52:43.460957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.581 [2024-10-01 13:52:43.460995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.581 [2024-10-01 13:52:43.461013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.581 [2024-10-01 13:52:43.461029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.581 [2024-10-01 13:52:43.461063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.581 [2024-10-01 13:52:43.464186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.581 [2024-10-01 13:52:43.464366] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.582 [2024-10-01 13:52:43.464402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.582 [2024-10-01 13:52:43.464422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.582 [2024-10-01 13:52:43.464458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.582 [2024-10-01 13:52:43.464491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.582 [2024-10-01 13:52:43.464511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.582 [2024-10-01 13:52:43.464536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.582 [2024-10-01 13:52:43.464575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.582 [2024-10-01 13:52:43.470699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.582 [2024-10-01 13:52:43.470880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.582 [2024-10-01 13:52:43.470935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.582 [2024-10-01 13:52:43.470959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.582 [2024-10-01 13:52:43.472244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.582 [2024-10-01 13:52:43.473206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.582 [2024-10-01 13:52:43.473262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.582 [2024-10-01 13:52:43.473283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.582 [2024-10-01 13:52:43.473404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.582 [2024-10-01 13:52:43.474882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.582 [2024-10-01 13:52:43.475050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.582 [2024-10-01 13:52:43.475091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.582 [2024-10-01 13:52:43.475112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.582 [2024-10-01 13:52:43.475147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.582 [2024-10-01 13:52:43.475179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.582 [2024-10-01 13:52:43.475205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.582 [2024-10-01 13:52:43.475226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.582 [2024-10-01 13:52:43.475260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.582 [2024-10-01 13:52:43.482430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.582 [2024-10-01 13:52:43.482743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.582 [2024-10-01 13:52:43.482785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.582 [2024-10-01 13:52:43.482817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.582 [2024-10-01 13:52:43.482871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.582 [2024-10-01 13:52:43.482961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.582 [2024-10-01 13:52:43.482983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.582 [2024-10-01 13:52:43.482999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.582 [2024-10-01 13:52:43.483033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.582 [2024-10-01 13:52:43.485011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.582 [2024-10-01 13:52:43.485145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.582 [2024-10-01 13:52:43.485179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.582 [2024-10-01 13:52:43.485198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.582 [2024-10-01 13:52:43.486456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.582 [2024-10-01 13:52:43.487428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.582 [2024-10-01 13:52:43.487477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.582 [2024-10-01 13:52:43.487499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.582 [2024-10-01 13:52:43.487621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.582 [2024-10-01 13:52:43.492564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.582 [2024-10-01 13:52:43.492720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.582 [2024-10-01 13:52:43.492757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.582 [2024-10-01 13:52:43.492776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.582 [2024-10-01 13:52:43.492811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.582 [2024-10-01 13:52:43.492843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.582 [2024-10-01 13:52:43.492860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.582 [2024-10-01 13:52:43.492876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.582 [2024-10-01 13:52:43.492908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.582 [2024-10-01 13:52:43.496617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.582 [2024-10-01 13:52:43.496863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.582 [2024-10-01 13:52:43.496938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.582 [2024-10-01 13:52:43.496964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.582 [2024-10-01 13:52:43.497009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.582 [2024-10-01 13:52:43.497045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.582 [2024-10-01 13:52:43.497063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.582 [2024-10-01 13:52:43.497079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.582 [2024-10-01 13:52:43.497147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.582 [2024-10-01 13:52:43.503385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.582 [2024-10-01 13:52:43.503543] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.582 [2024-10-01 13:52:43.503579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.582 [2024-10-01 13:52:43.503598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.582 [2024-10-01 13:52:43.503643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.582 [2024-10-01 13:52:43.503676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.582 [2024-10-01 13:52:43.503696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.582 [2024-10-01 13:52:43.503723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.582 [2024-10-01 13:52:43.503767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.582 [2024-10-01 13:52:43.506723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.582 [2024-10-01 13:52:43.506853] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.582 [2024-10-01 13:52:43.506887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.582 [2024-10-01 13:52:43.506906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.582 [2024-10-01 13:52:43.506979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.582 [2024-10-01 13:52:43.507016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.582 [2024-10-01 13:52:43.507034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.582 [2024-10-01 13:52:43.507051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.582 [2024-10-01 13:52:43.507093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.582 [2024-10-01 13:52:43.513499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.582 [2024-10-01 13:52:43.513668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.582 [2024-10-01 13:52:43.513705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.582 [2024-10-01 13:52:43.513724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.582 [2024-10-01 13:52:43.513760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.582 [2024-10-01 13:52:43.513793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.582 [2024-10-01 13:52:43.513810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.582 [2024-10-01 13:52:43.513826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.582 [2024-10-01 13:52:43.515096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.582 [2024-10-01 13:52:43.516841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.582 [2024-10-01 13:52:43.517561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.582 [2024-10-01 13:52:43.517610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.582 [2024-10-01 13:52:43.517662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.582 [2024-10-01 13:52:43.517854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.582 [2024-10-01 13:52:43.518005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.582 [2024-10-01 13:52:43.518030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.582 [2024-10-01 13:52:43.518060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.582 [2024-10-01 13:52:43.518111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.582 [2024-10-01 13:52:43.525304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.582 [2024-10-01 13:52:43.525580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.582 [2024-10-01 13:52:43.525628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.582 [2024-10-01 13:52:43.525650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.582 [2024-10-01 13:52:43.525696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.582 [2024-10-01 13:52:43.525731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.582 [2024-10-01 13:52:43.525750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.582 [2024-10-01 13:52:43.525767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.582 [2024-10-01 13:52:43.525799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.583 [2024-10-01 13:52:43.527524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.583 [2024-10-01 13:52:43.527651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.583 [2024-10-01 13:52:43.527684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.583 [2024-10-01 13:52:43.527702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.583 [2024-10-01 13:52:43.527741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.583 [2024-10-01 13:52:43.527779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.583 [2024-10-01 13:52:43.527797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.583 [2024-10-01 13:52:43.527812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.583 [2024-10-01 13:52:43.527844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.583 [2024-10-01 13:52:43.535426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.583 [2024-10-01 13:52:43.535561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.583 [2024-10-01 13:52:43.535595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.583 [2024-10-01 13:52:43.535621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.583 [2024-10-01 13:52:43.535671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.583 [2024-10-01 13:52:43.535704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.583 [2024-10-01 13:52:43.535761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.583 [2024-10-01 13:52:43.535779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.583 [2024-10-01 13:52:43.535811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.583 [2024-10-01 13:52:43.539279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.583 [2024-10-01 13:52:43.539656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.583 [2024-10-01 13:52:43.539703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.583 [2024-10-01 13:52:43.539725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.583 [2024-10-01 13:52:43.539875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.583 [2024-10-01 13:52:43.539944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.583 [2024-10-01 13:52:43.539966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.583 [2024-10-01 13:52:43.539983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.583 [2024-10-01 13:52:43.540027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.583 [2024-10-01 13:52:43.546195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.583 [2024-10-01 13:52:43.546361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.583 [2024-10-01 13:52:43.546396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.583 [2024-10-01 13:52:43.546415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.583 [2024-10-01 13:52:43.546456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.583 [2024-10-01 13:52:43.546492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.583 [2024-10-01 13:52:43.546510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.583 [2024-10-01 13:52:43.546526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.583 [2024-10-01 13:52:43.546574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.583 [2024-10-01 13:52:43.549411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.583 [2024-10-01 13:52:43.549550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.583 [2024-10-01 13:52:43.549585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.583 [2024-10-01 13:52:43.549604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.583 [2024-10-01 13:52:43.549640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.583 [2024-10-01 13:52:43.549673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.583 [2024-10-01 13:52:43.549691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.583 [2024-10-01 13:52:43.549707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.583 [2024-10-01 13:52:43.549740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.583 [2024-10-01 13:52:43.556321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.583 [2024-10-01 13:52:43.556484] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.583 [2024-10-01 13:52:43.556525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.583 [2024-10-01 13:52:43.556546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.583 [2024-10-01 13:52:43.557802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.583 [2024-10-01 13:52:43.558818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.583 [2024-10-01 13:52:43.558865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.583 [2024-10-01 13:52:43.558888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.583 [2024-10-01 13:52:43.559036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.583 [2024-10-01 13:52:43.560345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.583 [2024-10-01 13:52:43.560490] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.583 [2024-10-01 13:52:43.560525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.583 [2024-10-01 13:52:43.560543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.583 [2024-10-01 13:52:43.560589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.583 [2024-10-01 13:52:43.560638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.583 [2024-10-01 13:52:43.560655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.583 [2024-10-01 13:52:43.560671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.583 [2024-10-01 13:52:43.560702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.583 [2024-10-01 13:52:43.567862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.583 [2024-10-01 13:52:43.568181] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.583 [2024-10-01 13:52:43.568231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.583 [2024-10-01 13:52:43.568267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.583 [2024-10-01 13:52:43.568316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.583 [2024-10-01 13:52:43.568362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.583 [2024-10-01 13:52:43.568380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.583 [2024-10-01 13:52:43.568397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.583 [2024-10-01 13:52:43.568430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.583 [2024-10-01 13:52:43.570447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.583 [2024-10-01 13:52:43.570601] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.583 [2024-10-01 13:52:43.570637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.583 [2024-10-01 13:52:43.570656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.583 [2024-10-01 13:52:43.571951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.583 [2024-10-01 13:52:43.572889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.583 [2024-10-01 13:52:43.572950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.583 [2024-10-01 13:52:43.572980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.583 [2024-10-01 13:52:43.573098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.583 [2024-10-01 13:52:43.578012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.583 [2024-10-01 13:52:43.578167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.583 [2024-10-01 13:52:43.578203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.583 [2024-10-01 13:52:43.578223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.583 [2024-10-01 13:52:43.578259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.583 [2024-10-01 13:52:43.578291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.583 [2024-10-01 13:52:43.578309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.583 [2024-10-01 13:52:43.578325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.583 [2024-10-01 13:52:43.578355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.583 [2024-10-01 13:52:43.582079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.583 [2024-10-01 13:52:43.582322] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.583 [2024-10-01 13:52:43.582361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.583 [2024-10-01 13:52:43.582381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.583 [2024-10-01 13:52:43.582425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.583 [2024-10-01 13:52:43.582465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.583 [2024-10-01 13:52:43.582498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.583 [2024-10-01 13:52:43.582516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.583 [2024-10-01 13:52:43.582566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.583 [2024-10-01 13:52:43.588719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.583 [2024-10-01 13:52:43.588891] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.583 [2024-10-01 13:52:43.588949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.583 [2024-10-01 13:52:43.588977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.583 [2024-10-01 13:52:43.589016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.583 [2024-10-01 13:52:43.589052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.583 [2024-10-01 13:52:43.589069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.583 [2024-10-01 13:52:43.589116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.583 [2024-10-01 13:52:43.589150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.583 [2024-10-01 13:52:43.592184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.583 [2024-10-01 13:52:43.592317] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.583 [2024-10-01 13:52:43.592356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.583 [2024-10-01 13:52:43.592377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.583 [2024-10-01 13:52:43.592412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.583 [2024-10-01 13:52:43.592444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.583 [2024-10-01 13:52:43.592462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.583 [2024-10-01 13:52:43.592478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.583 [2024-10-01 13:52:43.592509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.584 [2024-10-01 13:52:43.598842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.584 [2024-10-01 13:52:43.598999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.584 [2024-10-01 13:52:43.599034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.584 [2024-10-01 13:52:43.599054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.584 [2024-10-01 13:52:43.599089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.584 [2024-10-01 13:52:43.599121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.584 [2024-10-01 13:52:43.599138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.584 [2024-10-01 13:52:43.599165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.584 [2024-10-01 13:52:43.600393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.584 [2024-10-01 13:52:43.603055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.584 [2024-10-01 13:52:43.603203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.584 [2024-10-01 13:52:43.603238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.584 [2024-10-01 13:52:43.603257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.584 [2024-10-01 13:52:43.603309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.584 [2024-10-01 13:52:43.603343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.584 [2024-10-01 13:52:43.603361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.584 [2024-10-01 13:52:43.603376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.584 [2024-10-01 13:52:43.603408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.584 [2024-10-01 13:52:43.610388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.584 [2024-10-01 13:52:43.610796] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.584 [2024-10-01 13:52:43.610850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.584 [2024-10-01 13:52:43.610874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.584 [2024-10-01 13:52:43.611029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.584 [2024-10-01 13:52:43.611078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.584 [2024-10-01 13:52:43.611108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.584 [2024-10-01 13:52:43.611136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.584 [2024-10-01 13:52:43.611176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.584 [2024-10-01 13:52:43.613160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.584 [2024-10-01 13:52:43.613280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.584 [2024-10-01 13:52:43.613313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.584 [2024-10-01 13:52:43.613331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.584 [2024-10-01 13:52:43.613365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.584 [2024-10-01 13:52:43.613402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.584 [2024-10-01 13:52:43.613420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.584 [2024-10-01 13:52:43.613434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.584 [2024-10-01 13:52:43.613465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.584 [2024-10-01 13:52:43.620616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.584 [2024-10-01 13:52:43.620782] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.584 [2024-10-01 13:52:43.620817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.584 [2024-10-01 13:52:43.620836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.584 [2024-10-01 13:52:43.620872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.584 [2024-10-01 13:52:43.620935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.584 [2024-10-01 13:52:43.620959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.584 [2024-10-01 13:52:43.620976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.584 [2024-10-01 13:52:43.621010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.584 [2024-10-01 13:52:43.625025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.584 [2024-10-01 13:52:43.625298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.584 [2024-10-01 13:52:43.625346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.584 [2024-10-01 13:52:43.625368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.584 [2024-10-01 13:52:43.625415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.584 [2024-10-01 13:52:43.625491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.584 [2024-10-01 13:52:43.625511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.584 [2024-10-01 13:52:43.625527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.584 [2024-10-01 13:52:43.625560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.584 [2024-10-01 13:52:43.631340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.584 [2024-10-01 13:52:43.631653] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.584 [2024-10-01 13:52:43.631702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.584 [2024-10-01 13:52:43.631724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.584 [2024-10-01 13:52:43.631845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.584 [2024-10-01 13:52:43.631889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.584 [2024-10-01 13:52:43.631908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.584 [2024-10-01 13:52:43.631958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.584 [2024-10-01 13:52:43.631997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.584 [2024-10-01 13:52:43.635132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.584 [2024-10-01 13:52:43.635274] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.584 [2024-10-01 13:52:43.635309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.584 [2024-10-01 13:52:43.635328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.584 [2024-10-01 13:52:43.635363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.584 [2024-10-01 13:52:43.635396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.584 [2024-10-01 13:52:43.635413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.584 [2024-10-01 13:52:43.635429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.584 [2024-10-01 13:52:43.635462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.584 [2024-10-01 13:52:43.641480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.584 [2024-10-01 13:52:43.641630] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.584 [2024-10-01 13:52:43.641666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.584 [2024-10-01 13:52:43.641685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.584 [2024-10-01 13:52:43.641725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.584 [2024-10-01 13:52:43.641761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.584 [2024-10-01 13:52:43.641779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.584 [2024-10-01 13:52:43.641795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.584 [2024-10-01 13:52:43.643106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.584 [2024-10-01 13:52:43.645721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.584 [2024-10-01 13:52:43.645862] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.584 [2024-10-01 13:52:43.645896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.584 [2024-10-01 13:52:43.645933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.584 [2024-10-01 13:52:43.645986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.584 [2024-10-01 13:52:43.646021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.584 [2024-10-01 13:52:43.646039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.584 [2024-10-01 13:52:43.646055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.584 [2024-10-01 13:52:43.646087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.584 [2024-10-01 13:52:43.653264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.584 [2024-10-01 13:52:43.653536] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.584 [2024-10-01 13:52:43.653582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.584 [2024-10-01 13:52:43.653609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.584 [2024-10-01 13:52:43.653655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.584 [2024-10-01 13:52:43.653691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.584 [2024-10-01 13:52:43.653709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.584 [2024-10-01 13:52:43.653726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.584 [2024-10-01 13:52:43.653758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.584 [2024-10-01 13:52:43.655822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.584 [2024-10-01 13:52:43.655967] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.584 [2024-10-01 13:52:43.656002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.584 [2024-10-01 13:52:43.656036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.584 [2024-10-01 13:52:43.657291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.584 [2024-10-01 13:52:43.658246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.584 [2024-10-01 13:52:43.658290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.584 [2024-10-01 13:52:43.658311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.584 [2024-10-01 13:52:43.658443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.584 [2024-10-01 13:52:43.663389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.584 [2024-10-01 13:52:43.663540] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.584 [2024-10-01 13:52:43.663576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.584 [2024-10-01 13:52:43.663638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.584 [2024-10-01 13:52:43.663676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.584 [2024-10-01 13:52:43.663710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.584 [2024-10-01 13:52:43.663727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.584 [2024-10-01 13:52:43.663743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.584 [2024-10-01 13:52:43.663783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.585 [2024-10-01 13:52:43.667129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.585 [2024-10-01 13:52:43.667497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.585 [2024-10-01 13:52:43.667543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.585 [2024-10-01 13:52:43.667564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.585 [2024-10-01 13:52:43.667716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.585 [2024-10-01 13:52:43.667766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.585 [2024-10-01 13:52:43.667785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.585 [2024-10-01 13:52:43.667800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.585 [2024-10-01 13:52:43.667833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.585 [2024-10-01 13:52:43.674045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.585 [2024-10-01 13:52:43.674186] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.585 [2024-10-01 13:52:43.674221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.585 [2024-10-01 13:52:43.674240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.585 [2024-10-01 13:52:43.674275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.585 [2024-10-01 13:52:43.674313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.585 [2024-10-01 13:52:43.674333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.585 [2024-10-01 13:52:43.674348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.585 [2024-10-01 13:52:43.674378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.585 [2024-10-01 13:52:43.677235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.585 [2024-10-01 13:52:43.677354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.585 [2024-10-01 13:52:43.677394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.585 [2024-10-01 13:52:43.677414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.585 [2024-10-01 13:52:43.677449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.585 [2024-10-01 13:52:43.677481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.585 [2024-10-01 13:52:43.677527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.585 [2024-10-01 13:52:43.677543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.585 [2024-10-01 13:52:43.677576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.585 [2024-10-01 13:52:43.684141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.585 [2024-10-01 13:52:43.684270] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.585 [2024-10-01 13:52:43.684313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.585 [2024-10-01 13:52:43.684332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.585 [2024-10-01 13:52:43.685543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.585 [2024-10-01 13:52:43.686461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.585 [2024-10-01 13:52:43.686506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.585 [2024-10-01 13:52:43.686527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.585 [2024-10-01 13:52:43.686657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.585 [2024-10-01 13:52:43.687334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.585 [2024-10-01 13:52:43.688030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.585 [2024-10-01 13:52:43.688083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.585 [2024-10-01 13:52:43.688105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.585 [2024-10-01 13:52:43.688285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.585 [2024-10-01 13:52:43.688412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.585 [2024-10-01 13:52:43.688435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.585 [2024-10-01 13:52:43.688450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.585 [2024-10-01 13:52:43.688490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.585 [2024-10-01 13:52:43.695457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.585 [2024-10-01 13:52:43.695586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.585 [2024-10-01 13:52:43.695619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.585 [2024-10-01 13:52:43.695640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.585 [2024-10-01 13:52:43.695904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.585 [2024-10-01 13:52:43.696108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.585 [2024-10-01 13:52:43.696146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.585 [2024-10-01 13:52:43.696166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.585 [2024-10-01 13:52:43.696208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.585 [2024-10-01 13:52:43.697990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.585 [2024-10-01 13:52:43.698106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.585 [2024-10-01 13:52:43.698139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.585 [2024-10-01 13:52:43.698167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.585 [2024-10-01 13:52:43.698201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.585 [2024-10-01 13:52:43.698232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.585 [2024-10-01 13:52:43.698250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.585 [2024-10-01 13:52:43.698270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.585 [2024-10-01 13:52:43.698303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.585 [2024-10-01 13:52:43.705854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.585 [2024-10-01 13:52:43.706086] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.585 [2024-10-01 13:52:43.706129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.586 [2024-10-01 13:52:43.706161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.586 [2024-10-01 13:52:43.706207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.586 [2024-10-01 13:52:43.706255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.586 [2024-10-01 13:52:43.706278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.586 [2024-10-01 13:52:43.706299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.586 [2024-10-01 13:52:43.706338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.586 [2024-10-01 13:52:43.710251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.586 [2024-10-01 13:52:43.710622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.586 [2024-10-01 13:52:43.710664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.586 [2024-10-01 13:52:43.710689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.586 [2024-10-01 13:52:43.710742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.586 [2024-10-01 13:52:43.710802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.586 [2024-10-01 13:52:43.710828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.586 [2024-10-01 13:52:43.710849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.586 [2024-10-01 13:52:43.710889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.586 [2024-10-01 13:52:43.716814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.586 [2024-10-01 13:52:43.716998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.586 [2024-10-01 13:52:43.717045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.586 [2024-10-01 13:52:43.717067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.586 [2024-10-01 13:52:43.717134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.586 [2024-10-01 13:52:43.717167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.586 [2024-10-01 13:52:43.717185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.586 [2024-10-01 13:52:43.717200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.586 [2024-10-01 13:52:43.717231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.586 [2024-10-01 13:52:43.720373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.586 [2024-10-01 13:52:43.720510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.586 [2024-10-01 13:52:43.720544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.586 [2024-10-01 13:52:43.720563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.586 [2024-10-01 13:52:43.720596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.586 [2024-10-01 13:52:43.720648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.586 [2024-10-01 13:52:43.720671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.586 [2024-10-01 13:52:43.720686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.586 [2024-10-01 13:52:43.720719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.586 [2024-10-01 13:52:43.726948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.586 [2024-10-01 13:52:43.727091] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.586 [2024-10-01 13:52:43.727135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.586 [2024-10-01 13:52:43.727156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.586 [2024-10-01 13:52:43.728369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.586 [2024-10-01 13:52:43.729267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.586 [2024-10-01 13:52:43.729308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.586 [2024-10-01 13:52:43.729329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.586 [2024-10-01 13:52:43.729474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.586 [2024-10-01 13:52:43.731049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.586 [2024-10-01 13:52:43.731178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.586 [2024-10-01 13:52:43.731221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.586 [2024-10-01 13:52:43.731242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.586 [2024-10-01 13:52:43.731276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.586 [2024-10-01 13:52:43.731308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.586 [2024-10-01 13:52:43.731326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.586 [2024-10-01 13:52:43.731376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.586 [2024-10-01 13:52:43.731411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.586 [2024-10-01 13:52:43.738239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.586 [2024-10-01 13:52:43.738611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.586 [2024-10-01 13:52:43.738657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.586 [2024-10-01 13:52:43.738677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.586 [2024-10-01 13:52:43.738821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.586 [2024-10-01 13:52:43.738879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.586 [2024-10-01 13:52:43.738927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.586 [2024-10-01 13:52:43.738945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.586 [2024-10-01 13:52:43.738978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.586 [2024-10-01 13:52:43.741147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.586 [2024-10-01 13:52:43.741262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.586 [2024-10-01 13:52:43.741304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.586 [2024-10-01 13:52:43.741324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.586 [2024-10-01 13:52:43.741357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.586 [2024-10-01 13:52:43.741389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.586 [2024-10-01 13:52:43.741406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.586 [2024-10-01 13:52:43.741420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.586 [2024-10-01 13:52:43.742642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.586 [2024-10-01 13:52:43.748578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.586 [2024-10-01 13:52:43.748724] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.586 [2024-10-01 13:52:43.748769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.586 [2024-10-01 13:52:43.748791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.586 [2024-10-01 13:52:43.748827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.586 [2024-10-01 13:52:43.748860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.586 [2024-10-01 13:52:43.748877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.586 [2024-10-01 13:52:43.748893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.586 [2024-10-01 13:52:43.748945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.586 [2024-10-01 13:52:43.752688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.586 [2024-10-01 13:52:43.753126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.586 [2024-10-01 13:52:43.753169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.586 [2024-10-01 13:52:43.753193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.586 [2024-10-01 13:52:43.753330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.586 [2024-10-01 13:52:43.753376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.586 [2024-10-01 13:52:43.753404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.586 [2024-10-01 13:52:43.753427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.586 [2024-10-01 13:52:43.753463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.586 [2024-10-01 13:52:43.758683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.586 [2024-10-01 13:52:43.758808] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.586 [2024-10-01 13:52:43.758850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.586 [2024-10-01 13:52:43.758871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.586 [2024-10-01 13:52:43.759490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.586 [2024-10-01 13:52:43.759686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.586 [2024-10-01 13:52:43.759723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.586 [2024-10-01 13:52:43.759741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.586 [2024-10-01 13:52:43.759861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.586 [2024-10-01 13:52:43.763036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.586 [2024-10-01 13:52:43.763153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.586 [2024-10-01 13:52:43.763187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.586 [2024-10-01 13:52:43.763205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.587 [2024-10-01 13:52:43.763239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.587 [2024-10-01 13:52:43.763271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.587 [2024-10-01 13:52:43.763289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.587 [2024-10-01 13:52:43.763304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.587 [2024-10-01 13:52:43.763336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.587 [2024-10-01 13:52:43.769421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.587 [2024-10-01 13:52:43.769552] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.587 [2024-10-01 13:52:43.769587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.587 [2024-10-01 13:52:43.769606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.587 [2024-10-01 13:52:43.769670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.587 [2024-10-01 13:52:43.769704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.587 [2024-10-01 13:52:43.769723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.587 [2024-10-01 13:52:43.769738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.587 [2024-10-01 13:52:43.769769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.587 [2024-10-01 13:52:43.773892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.587 [2024-10-01 13:52:43.774029] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.587 [2024-10-01 13:52:43.774061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.587 [2024-10-01 13:52:43.774080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.587 [2024-10-01 13:52:43.774114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.587 [2024-10-01 13:52:43.774145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.587 [2024-10-01 13:52:43.774163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.587 [2024-10-01 13:52:43.774178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.587 [2024-10-01 13:52:43.774210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.587 [2024-10-01 13:52:43.781091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.587 [2024-10-01 13:52:43.781214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.587 [2024-10-01 13:52:43.781252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.587 [2024-10-01 13:52:43.781270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.587 [2024-10-01 13:52:43.781530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.587 [2024-10-01 13:52:43.781680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.587 [2024-10-01 13:52:43.781711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.587 [2024-10-01 13:52:43.781728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.587 [2024-10-01 13:52:43.781770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.587 [2024-10-01 13:52:43.783994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.587 [2024-10-01 13:52:43.784105] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.587 [2024-10-01 13:52:43.784137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.587 [2024-10-01 13:52:43.784156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.587 [2024-10-01 13:52:43.784188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.587 [2024-10-01 13:52:43.784220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.587 [2024-10-01 13:52:43.784238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.587 [2024-10-01 13:52:43.784252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.587 [2024-10-01 13:52:43.785473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.587 [2024-10-01 13:52:43.791401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.587 [2024-10-01 13:52:43.791529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.587 [2024-10-01 13:52:43.791561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.587 [2024-10-01 13:52:43.791588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.587 [2024-10-01 13:52:43.791622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.587 [2024-10-01 13:52:43.791653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.587 [2024-10-01 13:52:43.791671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.587 [2024-10-01 13:52:43.791686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.587 [2024-10-01 13:52:43.791718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.587 [2024-10-01 13:52:43.795414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.587 [2024-10-01 13:52:43.795541] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.587 [2024-10-01 13:52:43.795574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.587 [2024-10-01 13:52:43.795593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.587 [2024-10-01 13:52:43.795852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.587 [2024-10-01 13:52:43.796038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.587 [2024-10-01 13:52:43.796073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.587 [2024-10-01 13:52:43.796091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.587 [2024-10-01 13:52:43.796134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.587 [2024-10-01 13:52:43.801502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.587 [2024-10-01 13:52:43.801661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.587 [2024-10-01 13:52:43.801698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.587 [2024-10-01 13:52:43.801717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.587 [2024-10-01 13:52:43.802385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.587 [2024-10-01 13:52:43.802653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.587 [2024-10-01 13:52:43.802690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.587 [2024-10-01 13:52:43.802714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.587 [2024-10-01 13:52:43.802871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.587 [2024-10-01 13:52:43.805957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.587 [2024-10-01 13:52:43.806105] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.587 [2024-10-01 13:52:43.806140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.587 [2024-10-01 13:52:43.806193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.587 [2024-10-01 13:52:43.806231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.587 [2024-10-01 13:52:43.806265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.587 [2024-10-01 13:52:43.806284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.587 [2024-10-01 13:52:43.806299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.587 [2024-10-01 13:52:43.806331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.587 [2024-10-01 13:52:43.812425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.587 [2024-10-01 13:52:43.812553] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.587 [2024-10-01 13:52:43.812586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.587 [2024-10-01 13:52:43.812604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.587 [2024-10-01 13:52:43.812638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.587 [2024-10-01 13:52:43.812669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.587 [2024-10-01 13:52:43.812687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.587 [2024-10-01 13:52:43.812703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.587 [2024-10-01 13:52:43.812734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.587 [2024-10-01 13:52:43.816055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.587 [2024-10-01 13:52:43.816167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.587 [2024-10-01 13:52:43.816198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.587 [2024-10-01 13:52:43.816217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.587 [2024-10-01 13:52:43.816801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.587 [2024-10-01 13:52:43.817036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.587 [2024-10-01 13:52:43.817066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.587 [2024-10-01 13:52:43.817083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.587 [2024-10-01 13:52:43.817202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.587 [2024-10-01 13:52:43.824195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.587 [2024-10-01 13:52:43.824372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.587 [2024-10-01 13:52:43.824405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.587 [2024-10-01 13:52:43.824423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.587 [2024-10-01 13:52:43.824680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.587 [2024-10-01 13:52:43.824837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.587 [2024-10-01 13:52:43.824897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.587 [2024-10-01 13:52:43.824937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.587 [2024-10-01 13:52:43.825007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.587 [2024-10-01 13:52:43.826819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.587 [2024-10-01 13:52:43.826944] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.587 [2024-10-01 13:52:43.826977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.587 [2024-10-01 13:52:43.826996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.587 [2024-10-01 13:52:43.827030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.587 [2024-10-01 13:52:43.827061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.587 [2024-10-01 13:52:43.827079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.587 [2024-10-01 13:52:43.827107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.587 [2024-10-01 13:52:43.827139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.588 [2024-10-01 13:52:43.834520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.588 [2024-10-01 13:52:43.834656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.588 [2024-10-01 13:52:43.834688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.588 [2024-10-01 13:52:43.834706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.588 [2024-10-01 13:52:43.834740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.588 [2024-10-01 13:52:43.834771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.588 [2024-10-01 13:52:43.834789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.588 [2024-10-01 13:52:43.834803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.588 [2024-10-01 13:52:43.834845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.588 [2024-10-01 13:52:43.838489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.588 [2024-10-01 13:52:43.838611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.588 [2024-10-01 13:52:43.838643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.588 [2024-10-01 13:52:43.838661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.588 [2024-10-01 13:52:43.838943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.588 [2024-10-01 13:52:43.839102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.588 [2024-10-01 13:52:43.839127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.588 [2024-10-01 13:52:43.839143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.588 [2024-10-01 13:52:43.839183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.588 [2024-10-01 13:52:43.844643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.588 [2024-10-01 13:52:43.844821] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.588 [2024-10-01 13:52:43.844866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.588 [2024-10-01 13:52:43.844892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.588 [2024-10-01 13:52:43.845570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.588 [2024-10-01 13:52:43.845797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.588 [2024-10-01 13:52:43.845840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.588 [2024-10-01 13:52:43.845863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.588 [2024-10-01 13:52:43.846012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.588 [2024-10-01 13:52:43.848803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.588 [2024-10-01 13:52:43.848958] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.588 [2024-10-01 13:52:43.848991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.588 [2024-10-01 13:52:43.849010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.588 [2024-10-01 13:52:43.849045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.588 [2024-10-01 13:52:43.849077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.588 [2024-10-01 13:52:43.849095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.588 [2024-10-01 13:52:43.849109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.588 [2024-10-01 13:52:43.849141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.588 [2024-10-01 13:52:43.855440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.588 [2024-10-01 13:52:43.855630] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.588 [2024-10-01 13:52:43.855666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.588 [2024-10-01 13:52:43.855686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.588 [2024-10-01 13:52:43.855722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.588 [2024-10-01 13:52:43.855755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.588 [2024-10-01 13:52:43.855773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.588 [2024-10-01 13:52:43.855800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.588 [2024-10-01 13:52:43.855833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.588 [2024-10-01 13:52:43.858898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.588 [2024-10-01 13:52:43.859047] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.588 [2024-10-01 13:52:43.859080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.588 [2024-10-01 13:52:43.859141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.588 [2024-10-01 13:52:43.859754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.588 [2024-10-01 13:52:43.859977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.588 [2024-10-01 13:52:43.860012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.588 [2024-10-01 13:52:43.860029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.588 [2024-10-01 13:52:43.860148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.588 [2024-10-01 13:52:43.865874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.588 [2024-10-01 13:52:43.866039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.588 [2024-10-01 13:52:43.866074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.588 [2024-10-01 13:52:43.866094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.588 [2024-10-01 13:52:43.866292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.588 [2024-10-01 13:52:43.866386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.588 [2024-10-01 13:52:43.866411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.588 [2024-10-01 13:52:43.866428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.588 [2024-10-01 13:52:43.866462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.588 8240.93 IOPS, 32.19 MiB/s [2024-10-01 13:52:43.870137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.588 [2024-10-01 13:52:43.870723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.588 [2024-10-01 13:52:43.870769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.588 [2024-10-01 13:52:43.870790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.588 [2024-10-01 13:52:43.870964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.588 [2024-10-01 13:52:43.871087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.588 [2024-10-01 13:52:43.871110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.588 [2024-10-01 13:52:43.871127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.588 [2024-10-01 13:52:43.871241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.588 
00:18:34.588 Latency(us)
00:18:34.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:34.588 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:34.588 Verification LBA range: start 0x0 length 0x4000
00:18:34.588 NVMe0n1 : 15.01 8240.91 32.19 0.00 0.00 15497.88 1608.61 20614.05
00:18:34.588 ===================================================================================================================
00:18:34.588 Total : 8240.91 32.19 0.00 0.00 15497.88 1608.61 20614.05
00:18:34.588 [2024-10-01 13:52:43.876955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:34.588 [2024-10-01 13:52:43.877153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:34.588 [2024-10-01 13:52:43.877189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421
00:18:34.588 [2024-10-01 13:52:43.877210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set
00:18:34.588 [2024-10-01 13:52:43.877235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor
00:18:34.588 [2024-10-01 13:52:43.877262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:34.588 [2024-10-01 13:52:43.877278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:34.588 [2024-10-01 13:52:43.877294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:34.588 [2024-10-01 13:52:43.877314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:34.588 [2024-10-01 13:52:43.880219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:34.588 [2024-10-01 13:52:43.880370] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:34.588 [2024-10-01 13:52:43.880409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422
00:18:34.588 [2024-10-01 13:52:43.880434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set
00:18:34.588 [2024-10-01 13:52:43.880464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor
00:18:34.588 [2024-10-01 13:52:43.880490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:34.588 [2024-10-01 13:52:43.880511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:34.588 [2024-10-01 13:52:43.880532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:34.588 [2024-10-01 13:52:43.880567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
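The IOPS and MiB/s columns in the Latency(us) table above are consistent with the 4096-byte IO size shown in the job line: 8240.91 IOs/s at 4 KiB each is the reported 32.19 MiB/s. A one-line check of that arithmetic (a sketch, using awk only for floating-point math):

  # 8240.91 IOs/s * 4096 bytes per IO / 2^20 bytes per MiB
  awk 'BEGIN { printf "%.2f MiB/s\n", 8240.91 * 4096 / (1024 * 1024) }'
  # -> 32.19 MiB/s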
00:18:34.588 [2024-10-01 13:52:43.887093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.588 [2024-10-01 13:52:43.887294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.588 [2024-10-01 13:52:43.887329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.588 [2024-10-01 13:52:43.887349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.588 [2024-10-01 13:52:43.887375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.588 [2024-10-01 13:52:43.887397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.588 [2024-10-01 13:52:43.887414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.588 [2024-10-01 13:52:43.887430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.589 [2024-10-01 13:52:43.887468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.589 [2024-10-01 13:52:43.890302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.589 [2024-10-01 13:52:43.890409] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.589 [2024-10-01 13:52:43.890439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.589 [2024-10-01 13:52:43.890458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.589 [2024-10-01 13:52:43.890481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.589 [2024-10-01 13:52:43.890549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.589 [2024-10-01 13:52:43.890569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.589 [2024-10-01 13:52:43.890584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.589 [2024-10-01 13:52:43.890603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.589 [2024-10-01 13:52:43.897211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.589 [2024-10-01 13:52:43.897354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.589 [2024-10-01 13:52:43.897386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.589 [2024-10-01 13:52:43.897405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.589 [2024-10-01 13:52:43.897429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.589 [2024-10-01 13:52:43.897449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.589 [2024-10-01 13:52:43.897466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.589 [2024-10-01 13:52:43.897481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.589 [2024-10-01 13:52:43.897500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.589 [2024-10-01 13:52:43.900363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.589 [2024-10-01 13:52:43.900453] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.589 [2024-10-01 13:52:43.900481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.589 [2024-10-01 13:52:43.900500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.589 [2024-10-01 13:52:43.900522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.589 [2024-10-01 13:52:43.900542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.589 [2024-10-01 13:52:43.900557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.589 [2024-10-01 13:52:43.900572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.589 [2024-10-01 13:52:43.900590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.589 [2024-10-01 13:52:43.907295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.589 [2024-10-01 13:52:43.907405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.589 [2024-10-01 13:52:43.907437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.589 [2024-10-01 13:52:43.907455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.589 [2024-10-01 13:52:43.907478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.589 [2024-10-01 13:52:43.907513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.589 [2024-10-01 13:52:43.907532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.589 [2024-10-01 13:52:43.907548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.589 [2024-10-01 13:52:43.907596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.589 [2024-10-01 13:52:43.910420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.589 [2024-10-01 13:52:43.910506] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.589 [2024-10-01 13:52:43.910545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.589 [2024-10-01 13:52:43.910566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.589 [2024-10-01 13:52:43.910588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.589 [2024-10-01 13:52:43.910608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.589 [2024-10-01 13:52:43.910622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.589 [2024-10-01 13:52:43.910636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.589 [2024-10-01 13:52:43.910656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.589 [2024-10-01 13:52:43.917368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.589 [2024-10-01 13:52:43.917492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.589 [2024-10-01 13:52:43.917523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.589 [2024-10-01 13:52:43.917541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.589 [2024-10-01 13:52:43.917564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.589 [2024-10-01 13:52:43.917584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.589 [2024-10-01 13:52:43.917598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.589 [2024-10-01 13:52:43.917613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.589 [2024-10-01 13:52:43.917631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.589 [2024-10-01 13:52:43.920475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.589 [2024-10-01 13:52:43.920569] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.589 [2024-10-01 13:52:43.920598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.589 [2024-10-01 13:52:43.920617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.589 [2024-10-01 13:52:43.920639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.589 [2024-10-01 13:52:43.920659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.589 [2024-10-01 13:52:43.920674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.589 [2024-10-01 13:52:43.920689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.589 [2024-10-01 13:52:43.920708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.589 [2024-10-01 13:52:43.927454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.589 [2024-10-01 13:52:43.927617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.589 [2024-10-01 13:52:43.927648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.589 [2024-10-01 13:52:43.927701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.589 [2024-10-01 13:52:43.927750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.589 [2024-10-01 13:52:43.927775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.589 [2024-10-01 13:52:43.927790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.589 [2024-10-01 13:52:43.927806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.589 [2024-10-01 13:52:43.927825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.589 [2024-10-01 13:52:43.930545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.589 [2024-10-01 13:52:43.930646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.589 [2024-10-01 13:52:43.930675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.589 [2024-10-01 13:52:43.930694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.589 [2024-10-01 13:52:43.930715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.589 [2024-10-01 13:52:43.930735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.589 [2024-10-01 13:52:43.930749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.589 [2024-10-01 13:52:43.930764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.589 [2024-10-01 13:52:43.930782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.589 Received shutdown signal, test time was about 15.000000 seconds 00:18:34.589 00:18:34.589 Latency(us) 00:18:34.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.589 =================================================================================================================== 00:18:34.589 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:34.589 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:34.589 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=1 00:18:34.589 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # false 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # trap - ERR 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # print_backtrace 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # args=('--transport=tcp') 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # local args 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1157 -- # xtrace_disable 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:34.590 ========== Backtrace start: ========== 00:18:34.590 00:18:34.590 in /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh:68 -> main(["--transport=tcp"]) 00:18:34.590 ... 00:18:34.590 63 cat $testdir/try.txt 00:18:34.590 64 # if this test fails it means we didn't fail over to the second 00:18:34.590 65 count="$(grep -c "Resetting controller successful" < $testdir/try.txt)" 00:18:34.590 66 00:18:34.590 67 if ((count != 3)); then 00:18:34.590 => 68 false 00:18:34.590 69 fi 00:18:34.590 70 00:18:34.590 71 # Part 2 of the test. Start removing ports, starting with the one we are connected to, confirm that the ctrlr remains active until the final trid is removed. 00:18:34.590 72 $rootdir/build/examples/bdevperf -z -r $bdevperf_rpc_sock -q 128 -o 4096 -w verify -t 1 -f &> $testdir/try.txt & 00:18:34.590 73 bdevperf_pid=$! 00:18:34.590 ... 
00:18:34.590 00:18:34.590 ========== Backtrace end ========== 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1194 -- # return 0 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # process_shm --id 0 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@808 -- # type=--id 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@809 -- # id=0 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@820 -- # for n in $shm_files 00:18:34.590 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:34.590 nvmf_trace.0 00:18:34.881 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@823 -- # return 0 00:18:34.881 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:34.881 [2024-10-01 13:52:26.984946] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:18:34.881 [2024-10-01 13:52:26.985074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75867 ] 00:18:34.881 [2024-10-01 13:52:27.128016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.881 [2024-10-01 13:52:27.283318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.881 [2024-10-01 13:52:27.361257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:34.881 Running I/O for 15 seconds... 
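The failure reported in the backtrace above is the pass criterion from failover.sh lines 63-68: the test greps try.txt for "Resetting controller successful" and expects exactly 3 matches, but this run recorded count=1, so (( count != 3 )) triggered false. The same check can be re-run by hand against the try.txt contents reproduced in this log (a sketch; the path is the one printed by the cat command above):

  # count the successful controller resets bdevperf recorded during the 15s run
  count=$(grep -c "Resetting controller successful" /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  echo "count=$count"   # the failover test passes only when this is 3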
00:18:34.881 7184.00 IOPS, 28.06 MiB/s [2024-10-01 13:52:30.070093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.881 [2024-10-01 13:52:30.070201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.881 [2024-10-01 13:52:30.070237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.881 [2024-10-01 13:52:30.070255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.881 [2024-10-01 13:52:30.070274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.070295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.070327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.070359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.070391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.070424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.070457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.070490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.070522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 
13:52:30.070551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.070619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.070654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.070686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.070720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.070752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.070785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.070818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.070864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.070895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.070944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.070976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.070993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.071008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.071040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.071083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.071114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.071146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.071178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.882 [2024-10-01 13:52:30.071210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.071242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.071273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:87 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.071306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.071338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.071370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.071412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.071445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.071483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.071518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.071549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.071581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.071613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71040 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:34.882 [2024-10-01 13:52:30.071646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.882 [2024-10-01 13:52:30.071663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 [2024-10-01 13:52:30.071677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.071694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.071709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.071725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.071741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.071758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.071773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.071790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.071805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.071822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.071837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.071853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.071868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.071885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.071907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.071948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.071966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.071983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 
[2024-10-01 13:52:30.071998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 [2024-10-01 13:52:30.072030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 [2024-10-01 13:52:30.072062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 [2024-10-01 13:52:30.072095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 [2024-10-01 13:52:30.072126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 [2024-10-01 13:52:30.072158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 [2024-10-01 13:52:30.072190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 [2024-10-01 13:52:30.072221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 [2024-10-01 13:52:30.072253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 [2024-10-01 13:52:30.072284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 [2024-10-01 13:52:30.072315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 [2024-10-01 13:52:30.072359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 [2024-10-01 13:52:30.072391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.883 [2024-10-01 13:52:30.072423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.072984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.883 [2024-10-01 13:52:30.072999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.883 [2024-10-01 13:52:30.073022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.884 [2024-10-01 13:52:30.073268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.884 [2024-10-01 13:52:30.073299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.884 [2024-10-01 13:52:30.073331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 
[2024-10-01 13:52:30.073348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.884 [2024-10-01 13:52:30.073363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.884 [2024-10-01 13:52:30.073394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.884 [2024-10-01 13:52:30.073426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.884 [2024-10-01 13:52:30.073458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.884 [2024-10-01 13:52:30.073490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.073966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.073984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.074000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.074017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.884 [2024-10-01 13:52:30.074040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.074058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.884 [2024-10-01 13:52:30.074073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.074096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.884 [2024-10-01 13:52:30.074112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.074129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.884 [2024-10-01 13:52:30.074144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.074160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.884 [2024-10-01 13:52:30.074176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.074192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.884 [2024-10-01 13:52:30.074207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.884 [2024-10-01 13:52:30.074223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.885 [2024-10-01 13:52:30.074238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.074255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.885 [2024-10-01 13:52:30.074270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.074287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.885 [2024-10-01 13:52:30.074302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.074319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.885 [2024-10-01 13:52:30.074334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.074351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70792 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.885 [2024-10-01 13:52:30.074366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.074383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.885 [2024-10-01 13:52:30.074397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.074414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.885 [2024-10-01 13:52:30.074429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.074453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.885 [2024-10-01 13:52:30.074469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.074486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.885 [2024-10-01 13:52:30.074513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.074532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.885 [2024-10-01 13:52:30.074558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.074576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cc770 is same with the state(6) to be set 00:18:34.885 [2024-10-01 13:52:30.074597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.885 [2024-10-01 13:52:30.074610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.885 [2024-10-01 13:52:30.074622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70840 len:8 PRP1 0x0 PRP2 0x0 00:18:34.885 [2024-10-01 13:52:30.074643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.074729] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9cc770 was disconnected and freed. reset controller. 
00:18:34.885 [2024-10-01 13:52:30.074879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.885 [2024-10-01 13:52:30.074908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.074948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.885 [2024-10-01 13:52:30.074964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.074979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.885 [2024-10-01 13:52:30.074994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.075009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.885 [2024-10-01 13:52:30.075023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.885 [2024-10-01 13:52:30.075038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.885 [2024-10-01 13:52:30.076103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.885 [2024-10-01 13:52:30.076148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.885 [2024-10-01 13:52:30.076540] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.885 [2024-10-01 13:52:30.076574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.885 [2024-10-01 13:52:30.076593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.885 [2024-10-01 13:52:30.076717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.885 [2024-10-01 13:52:30.076801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.885 [2024-10-01 13:52:30.076826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.885 [2024-10-01 13:52:30.076845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.885 [2024-10-01 13:52:30.076880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.885 [2024-10-01 13:52:30.087090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.885 [2024-10-01 13:52:30.087291] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.885 [2024-10-01 13:52:30.087337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.885 [2024-10-01 13:52:30.087359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.885 [2024-10-01 13:52:30.087396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.885 [2024-10-01 13:52:30.087430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.885 [2024-10-01 13:52:30.087448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.885 [2024-10-01 13:52:30.087466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.885 [2024-10-01 13:52:30.087500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.885 [2024-10-01 13:52:30.097210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.885 [2024-10-01 13:52:30.097361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.885 [2024-10-01 13:52:30.097396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.885 [2024-10-01 13:52:30.097415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.885 [2024-10-01 13:52:30.097451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.885 [2024-10-01 13:52:30.097484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.885 [2024-10-01 13:52:30.097503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.885 [2024-10-01 13:52:30.097521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.885 [2024-10-01 13:52:30.097561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.885 [2024-10-01 13:52:30.107887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.885 [2024-10-01 13:52:30.108065] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.885 [2024-10-01 13:52:30.108100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.885 [2024-10-01 13:52:30.108119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.885 [2024-10-01 13:52:30.108155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.885 [2024-10-01 13:52:30.108189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.885 [2024-10-01 13:52:30.108209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.885 [2024-10-01 13:52:30.108227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.885 [2024-10-01 13:52:30.108290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.885 [2024-10-01 13:52:30.118131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.885 [2024-10-01 13:52:30.118292] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.885 [2024-10-01 13:52:30.118327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.885 [2024-10-01 13:52:30.118346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.885 [2024-10-01 13:52:30.119296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.885 [2024-10-01 13:52:30.119944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.885 [2024-10-01 13:52:30.119982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.885 [2024-10-01 13:52:30.120003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.885 [2024-10-01 13:52:30.120114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.885 [2024-10-01 13:52:30.128281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.885 [2024-10-01 13:52:30.128440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.885 [2024-10-01 13:52:30.128474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.885 [2024-10-01 13:52:30.128494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.885 [2024-10-01 13:52:30.128530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.885 [2024-10-01 13:52:30.128563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.885 [2024-10-01 13:52:30.128581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.885 [2024-10-01 13:52:30.128598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.886 [2024-10-01 13:52:30.128630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.886 [2024-10-01 13:52:30.138392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.886 [2024-10-01 13:52:30.138569] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.886 [2024-10-01 13:52:30.138605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.886 [2024-10-01 13:52:30.138634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.886 [2024-10-01 13:52:30.138672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.886 [2024-10-01 13:52:30.138705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.886 [2024-10-01 13:52:30.138723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.886 [2024-10-01 13:52:30.138740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.886 [2024-10-01 13:52:30.138773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.886 [2024-10-01 13:52:30.148507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.886 [2024-10-01 13:52:30.148666] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.886 [2024-10-01 13:52:30.148701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.886 [2024-10-01 13:52:30.148772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.886 [2024-10-01 13:52:30.148810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.886 [2024-10-01 13:52:30.148844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.886 [2024-10-01 13:52:30.148862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.886 [2024-10-01 13:52:30.148878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.886 [2024-10-01 13:52:30.148927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.886 [2024-10-01 13:52:30.158619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.886 [2024-10-01 13:52:30.158769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.886 [2024-10-01 13:52:30.158812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.886 [2024-10-01 13:52:30.158831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.886 [2024-10-01 13:52:30.158866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.886 [2024-10-01 13:52:30.158899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.886 [2024-10-01 13:52:30.158933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.886 [2024-10-01 13:52:30.158951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.886 [2024-10-01 13:52:30.158985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.886 [2024-10-01 13:52:30.169390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.886 [2024-10-01 13:52:30.169756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.886 [2024-10-01 13:52:30.169802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.886 [2024-10-01 13:52:30.169823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.886 [2024-10-01 13:52:30.169900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.886 [2024-10-01 13:52:30.169958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.886 [2024-10-01 13:52:30.169978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.886 [2024-10-01 13:52:30.169994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.886 [2024-10-01 13:52:30.170026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.886 [2024-10-01 13:52:30.179496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.886 [2024-10-01 13:52:30.179634] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.886 [2024-10-01 13:52:30.179668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.886 [2024-10-01 13:52:30.179688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.886 [2024-10-01 13:52:30.179724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.886 [2024-10-01 13:52:30.179757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.886 [2024-10-01 13:52:30.179810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.886 [2024-10-01 13:52:30.179828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.886 [2024-10-01 13:52:30.180749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.886 [2024-10-01 13:52:30.189594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.886 [2024-10-01 13:52:30.189728] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.886 [2024-10-01 13:52:30.189769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.886 [2024-10-01 13:52:30.189790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.886 [2024-10-01 13:52:30.189825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.886 [2024-10-01 13:52:30.189857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.886 [2024-10-01 13:52:30.189875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.886 [2024-10-01 13:52:30.189892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.886 [2024-10-01 13:52:30.189940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.886 [2024-10-01 13:52:30.200973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.886 [2024-10-01 13:52:30.201123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.886 [2024-10-01 13:52:30.201157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.886 [2024-10-01 13:52:30.201184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.886 [2024-10-01 13:52:30.201220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.886 [2024-10-01 13:52:30.201254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.886 [2024-10-01 13:52:30.201272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.886 [2024-10-01 13:52:30.201288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.886 [2024-10-01 13:52:30.201319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.886 [2024-10-01 13:52:30.211085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.886 [2024-10-01 13:52:30.211229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.886 [2024-10-01 13:52:30.211269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.886 [2024-10-01 13:52:30.211289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.886 [2024-10-01 13:52:30.211325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.886 [2024-10-01 13:52:30.211358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.887 [2024-10-01 13:52:30.211376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.887 [2024-10-01 13:52:30.211393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.887 [2024-10-01 13:52:30.211425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.887 [2024-10-01 13:52:30.221182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.887 [2024-10-01 13:52:30.221368] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.887 [2024-10-01 13:52:30.221402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.887 [2024-10-01 13:52:30.221421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.887 [2024-10-01 13:52:30.221456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.887 [2024-10-01 13:52:30.221505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.887 [2024-10-01 13:52:30.221526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.887 [2024-10-01 13:52:30.221542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.887 [2024-10-01 13:52:30.221574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.887 [2024-10-01 13:52:30.231326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.887 [2024-10-01 13:52:30.231464] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.887 [2024-10-01 13:52:30.231505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.887 [2024-10-01 13:52:30.231525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.887 [2024-10-01 13:52:30.231561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.887 [2024-10-01 13:52:30.231594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.887 [2024-10-01 13:52:30.231613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.887 [2024-10-01 13:52:30.231629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.887 [2024-10-01 13:52:30.231660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.887 [2024-10-01 13:52:30.241662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.887 [2024-10-01 13:52:30.242037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.887 [2024-10-01 13:52:30.242082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.887 [2024-10-01 13:52:30.242104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.887 [2024-10-01 13:52:30.242179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.887 [2024-10-01 13:52:30.242219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.887 [2024-10-01 13:52:30.242238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.887 [2024-10-01 13:52:30.242255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.887 [2024-10-01 13:52:30.242294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.887 [2024-10-01 13:52:30.251762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.887 [2024-10-01 13:52:30.251898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.887 [2024-10-01 13:52:30.251946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.887 [2024-10-01 13:52:30.251967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.887 [2024-10-01 13:52:30.252035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.887 [2024-10-01 13:52:30.252069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.887 [2024-10-01 13:52:30.252087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.887 [2024-10-01 13:52:30.252104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.887 [2024-10-01 13:52:30.252135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.887 [2024-10-01 13:52:30.261861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.887 [2024-10-01 13:52:30.262008] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.887 [2024-10-01 13:52:30.262056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.887 [2024-10-01 13:52:30.262077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.887 [2024-10-01 13:52:30.262113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.887 [2024-10-01 13:52:30.262146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.887 [2024-10-01 13:52:30.262164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.887 [2024-10-01 13:52:30.262180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.887 [2024-10-01 13:52:30.262211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.887 [2024-10-01 13:52:30.271977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.887 [2024-10-01 13:52:30.272126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.887 [2024-10-01 13:52:30.272164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.887 [2024-10-01 13:52:30.272184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.887 [2024-10-01 13:52:30.273408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.887 [2024-10-01 13:52:30.273629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.887 [2024-10-01 13:52:30.273665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.887 [2024-10-01 13:52:30.273684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.887 [2024-10-01 13:52:30.273720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.887 [2024-10-01 13:52:30.283371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.887 [2024-10-01 13:52:30.283541] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.887 [2024-10-01 13:52:30.283576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.887 [2024-10-01 13:52:30.283596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.887 [2024-10-01 13:52:30.283632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.887 [2024-10-01 13:52:30.283666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.887 [2024-10-01 13:52:30.283684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.887 [2024-10-01 13:52:30.283742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.887 [2024-10-01 13:52:30.283777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.887 [2024-10-01 13:52:30.293878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.887 [2024-10-01 13:52:30.294057] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.887 [2024-10-01 13:52:30.294101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.887 [2024-10-01 13:52:30.294122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.887 [2024-10-01 13:52:30.294159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.887 [2024-10-01 13:52:30.294192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.887 [2024-10-01 13:52:30.294210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.887 [2024-10-01 13:52:30.294227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.887 [2024-10-01 13:52:30.294259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.887 [2024-10-01 13:52:30.304541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.887 [2024-10-01 13:52:30.305432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.887 [2024-10-01 13:52:30.305479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.887 [2024-10-01 13:52:30.305501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.887 [2024-10-01 13:52:30.305693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.887 [2024-10-01 13:52:30.305749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.887 [2024-10-01 13:52:30.305771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.887 [2024-10-01 13:52:30.305788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.887 [2024-10-01 13:52:30.305822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.887 [2024-10-01 13:52:30.316075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.887 [2024-10-01 13:52:30.316228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.887 [2024-10-01 13:52:30.316264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.887 [2024-10-01 13:52:30.316285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.887 [2024-10-01 13:52:30.316320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.887 [2024-10-01 13:52:30.317241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.887 [2024-10-01 13:52:30.317279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.887 [2024-10-01 13:52:30.317300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.887 [2024-10-01 13:52:30.317509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.888 [2024-10-01 13:52:30.327322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.888 [2024-10-01 13:52:30.327466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.888 [2024-10-01 13:52:30.327539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.888 [2024-10-01 13:52:30.327561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.888 [2024-10-01 13:52:30.327596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.888 [2024-10-01 13:52:30.327630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.888 [2024-10-01 13:52:30.327647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.888 [2024-10-01 13:52:30.327664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.888 [2024-10-01 13:52:30.327698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.888 [2024-10-01 13:52:30.337947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.888 [2024-10-01 13:52:30.338108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.888 [2024-10-01 13:52:30.338153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.888 [2024-10-01 13:52:30.338174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.888 [2024-10-01 13:52:30.338210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.888 [2024-10-01 13:52:30.338243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.888 [2024-10-01 13:52:30.338262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.888 [2024-10-01 13:52:30.338279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.888 [2024-10-01 13:52:30.338311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.888 [2024-10-01 13:52:30.348628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.888 [2024-10-01 13:52:30.349500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.888 [2024-10-01 13:52:30.349547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.888 [2024-10-01 13:52:30.349569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.888 [2024-10-01 13:52:30.349773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.888 [2024-10-01 13:52:30.349849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.888 [2024-10-01 13:52:30.349873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.888 [2024-10-01 13:52:30.349890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.888 [2024-10-01 13:52:30.349941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.888 [2024-10-01 13:52:30.358737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.888 [2024-10-01 13:52:30.358890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.888 [2024-10-01 13:52:30.358938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.888 [2024-10-01 13:52:30.358960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.888 [2024-10-01 13:52:30.358997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.888 [2024-10-01 13:52:30.360274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.888 [2024-10-01 13:52:30.360313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.888 [2024-10-01 13:52:30.360333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.888 [2024-10-01 13:52:30.360576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.888 [2024-10-01 13:52:30.368851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.888 [2024-10-01 13:52:30.369006] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.888 [2024-10-01 13:52:30.369041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.888 [2024-10-01 13:52:30.369060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.888 [2024-10-01 13:52:30.369863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.888 [2024-10-01 13:52:30.370084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.888 [2024-10-01 13:52:30.370119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.888 [2024-10-01 13:52:30.370139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.888 [2024-10-01 13:52:30.371148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.888 [2024-10-01 13:52:30.379227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.888 [2024-10-01 13:52:30.379375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.888 [2024-10-01 13:52:30.379419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.888 [2024-10-01 13:52:30.379441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.888 [2024-10-01 13:52:30.379477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.888 [2024-10-01 13:52:30.379511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.888 [2024-10-01 13:52:30.379529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.888 [2024-10-01 13:52:30.379546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.888 [2024-10-01 13:52:30.379579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.888 [2024-10-01 13:52:30.389777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.888 [2024-10-01 13:52:30.389984] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.888 [2024-10-01 13:52:30.390027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.888 [2024-10-01 13:52:30.390049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.888 [2024-10-01 13:52:30.390085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.888 [2024-10-01 13:52:30.390118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.888 [2024-10-01 13:52:30.390136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.888 [2024-10-01 13:52:30.390153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.888 [2024-10-01 13:52:30.391178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.888 [2024-10-01 13:52:30.401013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.888 [2024-10-01 13:52:30.401180] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.888 [2024-10-01 13:52:30.401217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.888 [2024-10-01 13:52:30.401236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.888 [2024-10-01 13:52:30.401272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.888 [2024-10-01 13:52:30.401305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.888 [2024-10-01 13:52:30.401323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.888 [2024-10-01 13:52:30.401340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.888 [2024-10-01 13:52:30.401371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.888 [2024-10-01 13:52:30.411514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.888 [2024-10-01 13:52:30.411669] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.888 [2024-10-01 13:52:30.411718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.888 [2024-10-01 13:52:30.411740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.888 [2024-10-01 13:52:30.411776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.888 [2024-10-01 13:52:30.411809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.888 [2024-10-01 13:52:30.411827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.888 [2024-10-01 13:52:30.411843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.888 [2024-10-01 13:52:30.411875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.888 [2024-10-01 13:52:30.422995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.888 [2024-10-01 13:52:30.423298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.888 [2024-10-01 13:52:30.423343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.888 [2024-10-01 13:52:30.423365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.888 [2024-10-01 13:52:30.423412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.888 [2024-10-01 13:52:30.423447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.888 [2024-10-01 13:52:30.423467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.888 [2024-10-01 13:52:30.423484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.888 [2024-10-01 13:52:30.423518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.888 [2024-10-01 13:52:30.433755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.889 [2024-10-01 13:52:30.433951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.889 [2024-10-01 13:52:30.433989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.889 [2024-10-01 13:52:30.434044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.889 [2024-10-01 13:52:30.434086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.889 [2024-10-01 13:52:30.435051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.889 [2024-10-01 13:52:30.435089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.889 [2024-10-01 13:52:30.435114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.889 [2024-10-01 13:52:30.435352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.889 [2024-10-01 13:52:30.445083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.889 [2024-10-01 13:52:30.445248] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.889 [2024-10-01 13:52:30.445284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.889 [2024-10-01 13:52:30.445304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.889 [2024-10-01 13:52:30.445340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.889 [2024-10-01 13:52:30.445373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.889 [2024-10-01 13:52:30.445393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.889 [2024-10-01 13:52:30.445409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.889 [2024-10-01 13:52:30.445442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.889 [2024-10-01 13:52:30.455708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.889 [2024-10-01 13:52:30.455886] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.889 [2024-10-01 13:52:30.455934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.889 [2024-10-01 13:52:30.455956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.889 [2024-10-01 13:52:30.455992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.889 [2024-10-01 13:52:30.456025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.889 [2024-10-01 13:52:30.456052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.889 [2024-10-01 13:52:30.456067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.889 [2024-10-01 13:52:30.456100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.889 [2024-10-01 13:52:30.466712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.889 [2024-10-01 13:52:30.467591] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.889 [2024-10-01 13:52:30.467639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.889 [2024-10-01 13:52:30.467662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.889 [2024-10-01 13:52:30.467878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.889 [2024-10-01 13:52:30.467945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.889 [2024-10-01 13:52:30.468002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.889 [2024-10-01 13:52:30.468020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.889 [2024-10-01 13:52:30.468055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.889 [2024-10-01 13:52:30.476821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.889 [2024-10-01 13:52:30.476998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.889 [2024-10-01 13:52:30.477034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.889 [2024-10-01 13:52:30.477053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.889 [2024-10-01 13:52:30.477089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.889 [2024-10-01 13:52:30.477121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.889 [2024-10-01 13:52:30.477139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.889 [2024-10-01 13:52:30.477155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.889 [2024-10-01 13:52:30.477187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.889 [2024-10-01 13:52:30.486952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.889 [2024-10-01 13:52:30.487107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.889 [2024-10-01 13:52:30.487142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.889 [2024-10-01 13:52:30.487161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.889 [2024-10-01 13:52:30.487196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.889 [2024-10-01 13:52:30.487229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.889 [2024-10-01 13:52:30.487249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.889 [2024-10-01 13:52:30.487265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.889 [2024-10-01 13:52:30.487304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.889 [2024-10-01 13:52:30.497143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.889 [2024-10-01 13:52:30.498055] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.889 [2024-10-01 13:52:30.498104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.889 [2024-10-01 13:52:30.498127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.889 [2024-10-01 13:52:30.498328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.889 [2024-10-01 13:52:30.498385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.889 [2024-10-01 13:52:30.498407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.889 [2024-10-01 13:52:30.498423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.889 [2024-10-01 13:52:30.498456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.889 [2024-10-01 13:52:30.508446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.889 [2024-10-01 13:52:30.508806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.889 [2024-10-01 13:52:30.508857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.889 [2024-10-01 13:52:30.508879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.889 [2024-10-01 13:52:30.508938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.889 [2024-10-01 13:52:30.508977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.889 [2024-10-01 13:52:30.508995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.889 [2024-10-01 13:52:30.509012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.889 [2024-10-01 13:52:30.509907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.889 [2024-10-01 13:52:30.520068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.889 [2024-10-01 13:52:30.520283] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.889 [2024-10-01 13:52:30.520319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.889 [2024-10-01 13:52:30.520338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.889 [2024-10-01 13:52:30.520375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.889 [2024-10-01 13:52:30.520408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.889 [2024-10-01 13:52:30.520425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.889 [2024-10-01 13:52:30.520442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.889 [2024-10-01 13:52:30.520475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.889 [2024-10-01 13:52:30.530571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.889 [2024-10-01 13:52:30.530745] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.889 [2024-10-01 13:52:30.530782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.889 [2024-10-01 13:52:30.530802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.890 [2024-10-01 13:52:30.530838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.890 [2024-10-01 13:52:30.530872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.890 [2024-10-01 13:52:30.530890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.890 [2024-10-01 13:52:30.530906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.890 [2024-10-01 13:52:30.530960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.890 [2024-10-01 13:52:30.541183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.890 [2024-10-01 13:52:30.542063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.890 [2024-10-01 13:52:30.542114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.890 [2024-10-01 13:52:30.542137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.890 [2024-10-01 13:52:30.542349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.890 [2024-10-01 13:52:30.542404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.890 [2024-10-01 13:52:30.542425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.890 [2024-10-01 13:52:30.542441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.890 [2024-10-01 13:52:30.542475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.890 [2024-10-01 13:52:30.552731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.890 [2024-10-01 13:52:30.552951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.890 [2024-10-01 13:52:30.552991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.890 [2024-10-01 13:52:30.553012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.890 [2024-10-01 13:52:30.553959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.890 [2024-10-01 13:52:30.554208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.890 [2024-10-01 13:52:30.554246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.890 [2024-10-01 13:52:30.554267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.890 [2024-10-01 13:52:30.554350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.890 [2024-10-01 13:52:30.564224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.890 [2024-10-01 13:52:30.564404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.890 [2024-10-01 13:52:30.564440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.890 [2024-10-01 13:52:30.564460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.890 [2024-10-01 13:52:30.564498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.890 [2024-10-01 13:52:30.564531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.890 [2024-10-01 13:52:30.564549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.890 [2024-10-01 13:52:30.564567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.890 [2024-10-01 13:52:30.564600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.890 [2024-10-01 13:52:30.574854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.890 [2024-10-01 13:52:30.575053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.890 [2024-10-01 13:52:30.575089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.890 [2024-10-01 13:52:30.575109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.890 [2024-10-01 13:52:30.575146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.890 [2024-10-01 13:52:30.575178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.890 [2024-10-01 13:52:30.575197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.890 [2024-10-01 13:52:30.575244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.890 [2024-10-01 13:52:30.575279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.890 [2024-10-01 13:52:30.585634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.890 [2024-10-01 13:52:30.586532] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.890 [2024-10-01 13:52:30.586599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.890 [2024-10-01 13:52:30.586621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.890 [2024-10-01 13:52:30.586848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.890 [2024-10-01 13:52:30.586924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.890 [2024-10-01 13:52:30.586947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.890 [2024-10-01 13:52:30.586964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.890 [2024-10-01 13:52:30.586998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.890 [2024-10-01 13:52:30.597085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.890 [2024-10-01 13:52:30.597240] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.890 [2024-10-01 13:52:30.597275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.890 [2024-10-01 13:52:30.597294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.890 [2024-10-01 13:52:30.597330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.890 [2024-10-01 13:52:30.597363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.890 [2024-10-01 13:52:30.597381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.890 [2024-10-01 13:52:30.597397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.890 [2024-10-01 13:52:30.598308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.890 [2024-10-01 13:52:30.608265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.890 [2024-10-01 13:52:30.608419] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.890 [2024-10-01 13:52:30.608455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.890 [2024-10-01 13:52:30.608475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.890 [2024-10-01 13:52:30.608512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.890 [2024-10-01 13:52:30.608544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.890 [2024-10-01 13:52:30.608562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.890 [2024-10-01 13:52:30.608578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.890 [2024-10-01 13:52:30.608611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.890 [2024-10-01 13:52:30.618727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.890 [2024-10-01 13:52:30.618877] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.890 [2024-10-01 13:52:30.618967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.890 [2024-10-01 13:52:30.619011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.890 [2024-10-01 13:52:30.619049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.890 [2024-10-01 13:52:30.619082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.890 [2024-10-01 13:52:30.619100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.890 [2024-10-01 13:52:30.619116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.890 [2024-10-01 13:52:30.619147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.890 [2024-10-01 13:52:30.629490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.890 [2024-10-01 13:52:30.630358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.890 [2024-10-01 13:52:30.630424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.890 [2024-10-01 13:52:30.630446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.890 [2024-10-01 13:52:30.630644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.890 [2024-10-01 13:52:30.630694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.890 [2024-10-01 13:52:30.630714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.890 [2024-10-01 13:52:30.630730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.890 [2024-10-01 13:52:30.630763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.890 [2024-10-01 13:52:30.639597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.890 [2024-10-01 13:52:30.639756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.890 [2024-10-01 13:52:30.639798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.890 [2024-10-01 13:52:30.639827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.890 [2024-10-01 13:52:30.641097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.890 [2024-10-01 13:52:30.641372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.890 [2024-10-01 13:52:30.641423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.891 [2024-10-01 13:52:30.641444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.891 [2024-10-01 13:52:30.642405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.891 [2024-10-01 13:52:30.649730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.891 [2024-10-01 13:52:30.650006] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.891 [2024-10-01 13:52:30.650048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.891 [2024-10-01 13:52:30.650070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.891 [2024-10-01 13:52:30.650967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.891 [2024-10-01 13:52:30.651227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.891 [2024-10-01 13:52:30.651265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.891 [2024-10-01 13:52:30.651287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.891 [2024-10-01 13:52:30.652345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.891 [2024-10-01 13:52:30.660740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.891 [2024-10-01 13:52:30.661000] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.891 [2024-10-01 13:52:30.661041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.891 [2024-10-01 13:52:30.661062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.891 [2024-10-01 13:52:30.661104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.891 [2024-10-01 13:52:30.661138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.891 [2024-10-01 13:52:30.661157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.891 [2024-10-01 13:52:30.661175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.891 [2024-10-01 13:52:30.661208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.891 [2024-10-01 13:52:30.671723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.891 [2024-10-01 13:52:30.671985] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.891 [2024-10-01 13:52:30.672024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.891 [2024-10-01 13:52:30.672045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.891 [2024-10-01 13:52:30.673009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.891 [2024-10-01 13:52:30.673259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.891 [2024-10-01 13:52:30.673297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.891 [2024-10-01 13:52:30.673319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.891 [2024-10-01 13:52:30.673402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.891 [2024-10-01 13:52:30.683190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.891 [2024-10-01 13:52:30.683452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.891 [2024-10-01 13:52:30.683490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.891 [2024-10-01 13:52:30.683511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.891 [2024-10-01 13:52:30.683551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.891 [2024-10-01 13:52:30.683586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.891 [2024-10-01 13:52:30.683604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.891 [2024-10-01 13:52:30.683622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.891 [2024-10-01 13:52:30.683694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.891 [2024-10-01 13:52:30.694028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.891 [2024-10-01 13:52:30.694276] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.891 [2024-10-01 13:52:30.694316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.891 [2024-10-01 13:52:30.694336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.891 [2024-10-01 13:52:30.694376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.891 [2024-10-01 13:52:30.694410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.891 [2024-10-01 13:52:30.694429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.891 [2024-10-01 13:52:30.694447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.891 [2024-10-01 13:52:30.694491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.891 [2024-10-01 13:52:30.705851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.891 [2024-10-01 13:52:30.706129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.891 [2024-10-01 13:52:30.706169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.891 [2024-10-01 13:52:30.706190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.891 [2024-10-01 13:52:30.706231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.891 [2024-10-01 13:52:30.706265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.891 [2024-10-01 13:52:30.706284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.891 [2024-10-01 13:52:30.706315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.891 [2024-10-01 13:52:30.706350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.891 [2024-10-01 13:52:30.716555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.891 [2024-10-01 13:52:30.716804] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.891 [2024-10-01 13:52:30.716842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.891 [2024-10-01 13:52:30.716863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.891 [2024-10-01 13:52:30.717840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.891 [2024-10-01 13:52:30.718127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.891 [2024-10-01 13:52:30.718166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.891 [2024-10-01 13:52:30.718188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.891 [2024-10-01 13:52:30.718280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.891 [2024-10-01 13:52:30.727956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.891 [2024-10-01 13:52:30.728208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.891 [2024-10-01 13:52:30.728247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.891 [2024-10-01 13:52:30.728321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.891 [2024-10-01 13:52:30.728363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.891 [2024-10-01 13:52:30.728397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.891 [2024-10-01 13:52:30.728417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.891 [2024-10-01 13:52:30.728434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.891 [2024-10-01 13:52:30.728468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.891 [2024-10-01 13:52:30.738460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.891 [2024-10-01 13:52:30.738717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.891 [2024-10-01 13:52:30.738755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.891 [2024-10-01 13:52:30.738776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.891 [2024-10-01 13:52:30.738816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.891 [2024-10-01 13:52:30.738849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.891 [2024-10-01 13:52:30.738867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.891 [2024-10-01 13:52:30.738884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.892 [2024-10-01 13:52:30.738936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.892 [2024-10-01 13:52:30.749900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.892 [2024-10-01 13:52:30.750288] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.892 [2024-10-01 13:52:30.750338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.892 [2024-10-01 13:52:30.750362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.892 [2024-10-01 13:52:30.750421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.892 [2024-10-01 13:52:30.750458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.892 [2024-10-01 13:52:30.750477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.892 [2024-10-01 13:52:30.750495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.892 [2024-10-01 13:52:30.750529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.892 [2024-10-01 13:52:30.760771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.892 [2024-10-01 13:52:30.761053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.892 [2024-10-01 13:52:30.761091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.892 [2024-10-01 13:52:30.761113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.892 [2024-10-01 13:52:30.762114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.892 [2024-10-01 13:52:30.762376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.892 [2024-10-01 13:52:30.762444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.892 [2024-10-01 13:52:30.762466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.892 [2024-10-01 13:52:30.762593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.892 [2024-10-01 13:52:30.772563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.892 [2024-10-01 13:52:30.772814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.892 [2024-10-01 13:52:30.772853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.892 [2024-10-01 13:52:30.772874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.892 [2024-10-01 13:52:30.772930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.892 [2024-10-01 13:52:30.772988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.892 [2024-10-01 13:52:30.773011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.892 [2024-10-01 13:52:30.773029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.892 [2024-10-01 13:52:30.773064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.892 [2024-10-01 13:52:30.783385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.892 [2024-10-01 13:52:30.783626] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.892 [2024-10-01 13:52:30.783664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.892 [2024-10-01 13:52:30.783685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.892 [2024-10-01 13:52:30.783724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.892 [2024-10-01 13:52:30.783760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.892 [2024-10-01 13:52:30.783779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.892 [2024-10-01 13:52:30.783797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.892 [2024-10-01 13:52:30.783829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.892 [2024-10-01 13:52:30.794976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.892 [2024-10-01 13:52:30.795324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.892 [2024-10-01 13:52:30.795373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.892 [2024-10-01 13:52:30.795396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.892 [2024-10-01 13:52:30.795461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.892 [2024-10-01 13:52:30.795501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.892 [2024-10-01 13:52:30.795521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.892 [2024-10-01 13:52:30.795539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.892 [2024-10-01 13:52:30.795573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.892 [2024-10-01 13:52:30.805816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.892 [2024-10-01 13:52:30.806065] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.892 [2024-10-01 13:52:30.806104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.892 [2024-10-01 13:52:30.806125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.892 [2024-10-01 13:52:30.807122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.892 [2024-10-01 13:52:30.807362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.892 [2024-10-01 13:52:30.807404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.892 [2024-10-01 13:52:30.807425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.892 [2024-10-01 13:52:30.807528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.892 [2024-10-01 13:52:30.817344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.892 [2024-10-01 13:52:30.817603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.892 [2024-10-01 13:52:30.817641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.892 [2024-10-01 13:52:30.817663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.892 [2024-10-01 13:52:30.817703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.892 [2024-10-01 13:52:30.817737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.892 [2024-10-01 13:52:30.817755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.892 [2024-10-01 13:52:30.817773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.892 [2024-10-01 13:52:30.817807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.892 [2024-10-01 13:52:30.828079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.892 [2024-10-01 13:52:30.828324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.892 [2024-10-01 13:52:30.828362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.892 [2024-10-01 13:52:30.828383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.892 [2024-10-01 13:52:30.828422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.892 [2024-10-01 13:52:30.828455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.892 [2024-10-01 13:52:30.828473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.892 [2024-10-01 13:52:30.828490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.892 [2024-10-01 13:52:30.828523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.892 [2024-10-01 13:52:30.838404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.892 [2024-10-01 13:52:30.838592] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.892 [2024-10-01 13:52:30.838630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.892 [2024-10-01 13:52:30.838650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.892 [2024-10-01 13:52:30.838713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.892 [2024-10-01 13:52:30.839965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.892 [2024-10-01 13:52:30.840004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.892 [2024-10-01 13:52:30.840026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.892 [2024-10-01 13:52:30.840942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.892 [2024-10-01 13:52:30.848550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.892 [2024-10-01 13:52:30.848704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.892 [2024-10-01 13:52:30.848740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.892 [2024-10-01 13:52:30.848773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.892 [2024-10-01 13:52:30.848813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.892 [2024-10-01 13:52:30.848846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.892 [2024-10-01 13:52:30.848863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.892 [2024-10-01 13:52:30.848880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.892 [2024-10-01 13:52:30.848926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.892 [2024-10-01 13:52:30.859984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.892 [2024-10-01 13:52:30.860195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.892 [2024-10-01 13:52:30.860233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.893 [2024-10-01 13:52:30.860253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.893 [2024-10-01 13:52:30.860290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.893 [2024-10-01 13:52:30.860324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.893 [2024-10-01 13:52:30.860342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.893 [2024-10-01 13:52:30.860360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.893 [2024-10-01 13:52:30.860392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.893 7601.50 IOPS, 29.69 MiB/s [2024-10-01 13:52:30.871766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.893 [2024-10-01 13:52:30.872123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.893 [2024-10-01 13:52:30.872163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.893 [2024-10-01 13:52:30.872215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.893 [2024-10-01 13:52:30.872261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.893 [2024-10-01 13:52:30.872297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.893 [2024-10-01 13:52:30.872316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.893 [2024-10-01 13:52:30.872363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.893 [2024-10-01 13:52:30.872398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
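The interleaved throughput sample above is self-consistent: 7601.50 IOPS at roughly 4 KiB per I/O works out to 7601.50 × 4096 = 31,135,744 bytes/s ≈ 29.69 MiB/s, so the workload driving these reconnect attempts appears to be issuing 4 KiB I/Os (the block size is inferred from the arithmetic, not stated in the log).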
00:18:34.893 [2024-10-01 13:52:30.883073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.893 [2024-10-01 13:52:30.883275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.893 [2024-10-01 13:52:30.883312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.893 [2024-10-01 13:52:30.883332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.893 [2024-10-01 13:52:30.883379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.893 [2024-10-01 13:52:30.884323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.893 [2024-10-01 13:52:30.884364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.893 [2024-10-01 13:52:30.884385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.893 [2024-10-01 13:52:30.884627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.893 [2024-10-01 13:52:30.894366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.893 [2024-10-01 13:52:30.894550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.893 [2024-10-01 13:52:30.894588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.893 [2024-10-01 13:52:30.894609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.893 [2024-10-01 13:52:30.894646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.893 [2024-10-01 13:52:30.894680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.893 [2024-10-01 13:52:30.894697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.893 [2024-10-01 13:52:30.894714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.893 [2024-10-01 13:52:30.894746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.893 [2024-10-01 13:52:30.904797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.893 [2024-10-01 13:52:30.904970] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.893 [2024-10-01 13:52:30.905007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.893 [2024-10-01 13:52:30.905027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.893 [2024-10-01 13:52:30.905064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.893 [2024-10-01 13:52:30.905097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.893 [2024-10-01 13:52:30.905115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.893 [2024-10-01 13:52:30.905131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.893 [2024-10-01 13:52:30.905164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.893 [2024-10-01 13:52:30.915493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.893 [2024-10-01 13:52:30.916412] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.893 [2024-10-01 13:52:30.916463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.893 [2024-10-01 13:52:30.916485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.893 [2024-10-01 13:52:30.916675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.893 [2024-10-01 13:52:30.916725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.893 [2024-10-01 13:52:30.916745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.893 [2024-10-01 13:52:30.916761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.893 [2024-10-01 13:52:30.916800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.893 [2024-10-01 13:52:30.926943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.893 [2024-10-01 13:52:30.927107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.893 [2024-10-01 13:52:30.927142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.893 [2024-10-01 13:52:30.927161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.893 [2024-10-01 13:52:30.927196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.893 [2024-10-01 13:52:30.927228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.893 [2024-10-01 13:52:30.927245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.893 [2024-10-01 13:52:30.927261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.893 [2024-10-01 13:52:30.928169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.893 [2024-10-01 13:52:30.938216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.893 [2024-10-01 13:52:30.938384] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.893 [2024-10-01 13:52:30.938420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.893 [2024-10-01 13:52:30.938440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.893 [2024-10-01 13:52:30.938477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.893 [2024-10-01 13:52:30.938520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.893 [2024-10-01 13:52:30.938551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.893 [2024-10-01 13:52:30.938582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.893 [2024-10-01 13:52:30.938615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.893 [2024-10-01 13:52:30.948796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.893 [2024-10-01 13:52:30.949056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.893 [2024-10-01 13:52:30.949112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.893 [2024-10-01 13:52:30.949134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.893 [2024-10-01 13:52:30.949172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.893 [2024-10-01 13:52:30.949246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.893 [2024-10-01 13:52:30.949275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.893 [2024-10-01 13:52:30.949292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.893 [2024-10-01 13:52:30.949324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.893 [2024-10-01 13:52:30.959558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.893 [2024-10-01 13:52:30.960474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.893 [2024-10-01 13:52:30.960524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.893 [2024-10-01 13:52:30.960546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.893 [2024-10-01 13:52:30.960744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.893 [2024-10-01 13:52:30.960795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.893 [2024-10-01 13:52:30.960816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.893 [2024-10-01 13:52:30.960834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.893 [2024-10-01 13:52:30.960867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.893 [2024-10-01 13:52:30.970894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.893 [2024-10-01 13:52:30.971273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.893 [2024-10-01 13:52:30.971321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.893 [2024-10-01 13:52:30.971344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.893 [2024-10-01 13:52:30.971390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.894 [2024-10-01 13:52:30.971426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.894 [2024-10-01 13:52:30.971455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.894 [2024-10-01 13:52:30.971472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.894 [2024-10-01 13:52:30.972401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.894 [2024-10-01 13:52:30.982474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.894 [2024-10-01 13:52:30.982674] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.894 [2024-10-01 13:52:30.982712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.894 [2024-10-01 13:52:30.982732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.894 [2024-10-01 13:52:30.982769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.894 [2024-10-01 13:52:30.982802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.894 [2024-10-01 13:52:30.982820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.894 [2024-10-01 13:52:30.982837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.894 [2024-10-01 13:52:30.982943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.894 [2024-10-01 13:52:30.993027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.894 [2024-10-01 13:52:30.993182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.894 [2024-10-01 13:52:30.993217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.894 [2024-10-01 13:52:30.993237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.894 [2024-10-01 13:52:30.993273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.894 [2024-10-01 13:52:30.993306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.894 [2024-10-01 13:52:30.993324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.894 [2024-10-01 13:52:30.993340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.894 [2024-10-01 13:52:30.993380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.894 [2024-10-01 13:52:31.003695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.894 [2024-10-01 13:52:31.004573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.894 [2024-10-01 13:52:31.004621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.894 [2024-10-01 13:52:31.004644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.894 [2024-10-01 13:52:31.004833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.894 [2024-10-01 13:52:31.004882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.894 [2024-10-01 13:52:31.004902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.894 [2024-10-01 13:52:31.004937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.894 [2024-10-01 13:52:31.004973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.894 [2024-10-01 13:52:31.015186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.894 [2024-10-01 13:52:31.015346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.894 [2024-10-01 13:52:31.015381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.894 [2024-10-01 13:52:31.015400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.894 [2024-10-01 13:52:31.015436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.894 [2024-10-01 13:52:31.016352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.894 [2024-10-01 13:52:31.016392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.894 [2024-10-01 13:52:31.016414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.894 [2024-10-01 13:52:31.016630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.894 [2024-10-01 13:52:31.026246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.894 [2024-10-01 13:52:31.026396] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.894 [2024-10-01 13:52:31.026432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.894 [2024-10-01 13:52:31.026493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.894 [2024-10-01 13:52:31.026530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.894 [2024-10-01 13:52:31.026583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.894 [2024-10-01 13:52:31.026603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.894 [2024-10-01 13:52:31.026619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.894 [2024-10-01 13:52:31.026651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.894 [2024-10-01 13:52:31.036636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.894 [2024-10-01 13:52:31.036787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.894 [2024-10-01 13:52:31.036823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.894 [2024-10-01 13:52:31.036841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.894 [2024-10-01 13:52:31.036877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.894 [2024-10-01 13:52:31.036924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.894 [2024-10-01 13:52:31.036946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.894 [2024-10-01 13:52:31.036963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.894 [2024-10-01 13:52:31.036996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.894 [2024-10-01 13:52:31.047282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.894 [2024-10-01 13:52:31.048161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.894 [2024-10-01 13:52:31.048209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.894 [2024-10-01 13:52:31.048232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.894 [2024-10-01 13:52:31.048410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.894 [2024-10-01 13:52:31.048477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.894 [2024-10-01 13:52:31.048500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.894 [2024-10-01 13:52:31.048518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.894 [2024-10-01 13:52:31.048552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.894 [2024-10-01 13:52:31.058692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.894 [2024-10-01 13:52:31.058850] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.894 [2024-10-01 13:52:31.058885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.894 [2024-10-01 13:52:31.058905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.894 [2024-10-01 13:52:31.058957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.894 [2024-10-01 13:52:31.058991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.894 [2024-10-01 13:52:31.059040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.894 [2024-10-01 13:52:31.059066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.894 [2024-10-01 13:52:31.059988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.894 [2024-10-01 13:52:31.070365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.894 [2024-10-01 13:52:31.070526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.894 [2024-10-01 13:52:31.070576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.894 [2024-10-01 13:52:31.070597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.894 [2024-10-01 13:52:31.070635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.894 [2024-10-01 13:52:31.070668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.894 [2024-10-01 13:52:31.070686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.894 [2024-10-01 13:52:31.070703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.894 [2024-10-01 13:52:31.070735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.894 [2024-10-01 13:52:31.080864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.894 [2024-10-01 13:52:31.081054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.894 [2024-10-01 13:52:31.081089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.894 [2024-10-01 13:52:31.081113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.894 [2024-10-01 13:52:31.081148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.894 [2024-10-01 13:52:31.081181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.894 [2024-10-01 13:52:31.081199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.894 [2024-10-01 13:52:31.081216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.894 [2024-10-01 13:52:31.081248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.895 [2024-10-01 13:52:31.092306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.895 [2024-10-01 13:52:31.092597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.895 [2024-10-01 13:52:31.092641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.895 [2024-10-01 13:52:31.092664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.895 [2024-10-01 13:52:31.092709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.895 [2024-10-01 13:52:31.092744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.895 [2024-10-01 13:52:31.092762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.895 [2024-10-01 13:52:31.092779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.895 [2024-10-01 13:52:31.092811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.895 [2024-10-01 13:52:31.102413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.895 [2024-10-01 13:52:31.103790] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.895 [2024-10-01 13:52:31.103836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.895 [2024-10-01 13:52:31.103858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.895 [2024-10-01 13:52:31.104111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.895 [2024-10-01 13:52:31.104170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.895 [2024-10-01 13:52:31.104191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.895 [2024-10-01 13:52:31.104207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.895 [2024-10-01 13:52:31.104240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.895 [2024-10-01 13:52:31.112511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.895 [2024-10-01 13:52:31.112664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.895 [2024-10-01 13:52:31.112699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.895 [2024-10-01 13:52:31.112718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.895 [2024-10-01 13:52:31.112754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.895 [2024-10-01 13:52:31.112787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.895 [2024-10-01 13:52:31.112804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.895 [2024-10-01 13:52:31.112821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.895 [2024-10-01 13:52:31.112853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.895 [2024-10-01 13:52:31.123498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.895 [2024-10-01 13:52:31.123660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.895 [2024-10-01 13:52:31.123701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.895 [2024-10-01 13:52:31.123723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.895 [2024-10-01 13:52:31.123761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.895 [2024-10-01 13:52:31.123803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.895 [2024-10-01 13:52:31.123821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.895 [2024-10-01 13:52:31.123837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.895 [2024-10-01 13:52:31.123869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.895 [2024-10-01 13:52:31.134744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.895 [2024-10-01 13:52:31.134900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.895 [2024-10-01 13:52:31.134949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.895 [2024-10-01 13:52:31.134970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.895 [2024-10-01 13:52:31.135048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.895 [2024-10-01 13:52:31.135979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.895 [2024-10-01 13:52:31.136012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.895 [2024-10-01 13:52:31.136032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.895 [2024-10-01 13:52:31.136250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.895 [2024-10-01 13:52:31.145950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.895 [2024-10-01 13:52:31.146167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.895 [2024-10-01 13:52:31.146209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.895 [2024-10-01 13:52:31.146229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.895 [2024-10-01 13:52:31.146264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.895 [2024-10-01 13:52:31.146297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.895 [2024-10-01 13:52:31.146314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.895 [2024-10-01 13:52:31.146334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.895 [2024-10-01 13:52:31.146366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.895 [2024-10-01 13:52:31.157049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.895 [2024-10-01 13:52:31.157210] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.895 [2024-10-01 13:52:31.157251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.895 [2024-10-01 13:52:31.157272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.895 [2024-10-01 13:52:31.157308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.895 [2024-10-01 13:52:31.157341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.895 [2024-10-01 13:52:31.157359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.895 [2024-10-01 13:52:31.157375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.895 [2024-10-01 13:52:31.157409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.895 [2024-10-01 13:52:31.168152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.895 [2024-10-01 13:52:31.169053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.895 [2024-10-01 13:52:31.169097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.895 [2024-10-01 13:52:31.169118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.895 [2024-10-01 13:52:31.169296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.895 [2024-10-01 13:52:31.169352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.895 [2024-10-01 13:52:31.169374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.895 [2024-10-01 13:52:31.169428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.895 [2024-10-01 13:52:31.169464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.895 [2024-10-01 13:52:31.179611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.895 [2024-10-01 13:52:31.179765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.895 [2024-10-01 13:52:31.179799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.895 [2024-10-01 13:52:31.179818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.895 [2024-10-01 13:52:31.179854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.895 [2024-10-01 13:52:31.179887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.895 [2024-10-01 13:52:31.179904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.895 [2024-10-01 13:52:31.179938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.895 [2024-10-01 13:52:31.180842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.895 [2024-10-01 13:52:31.191219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.895 [2024-10-01 13:52:31.191377] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.895 [2024-10-01 13:52:31.191418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.896 [2024-10-01 13:52:31.191439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.896 [2024-10-01 13:52:31.191475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.896 [2024-10-01 13:52:31.191509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.896 [2024-10-01 13:52:31.191527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.896 [2024-10-01 13:52:31.191544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.896 [2024-10-01 13:52:31.191577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.896 [2024-10-01 13:52:31.201713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.896 [2024-10-01 13:52:31.201861] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.896 [2024-10-01 13:52:31.201902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.896 [2024-10-01 13:52:31.201937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.896 [2024-10-01 13:52:31.201974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.896 [2024-10-01 13:52:31.202007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.896 [2024-10-01 13:52:31.202024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.896 [2024-10-01 13:52:31.202040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.896 [2024-10-01 13:52:31.202073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.896 [2024-10-01 13:52:31.212863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.896 [2024-10-01 13:52:31.213761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.896 [2024-10-01 13:52:31.213807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.896 [2024-10-01 13:52:31.213828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.896 [2024-10-01 13:52:31.214036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.896 [2024-10-01 13:52:31.214093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.896 [2024-10-01 13:52:31.214114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.896 [2024-10-01 13:52:31.214130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.896 [2024-10-01 13:52:31.214164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.896 [2024-10-01 13:52:31.224206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.896 [2024-10-01 13:52:31.224529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.896 [2024-10-01 13:52:31.224572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.896 [2024-10-01 13:52:31.224593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.896 [2024-10-01 13:52:31.224649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.896 [2024-10-01 13:52:31.224686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.896 [2024-10-01 13:52:31.224711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.896 [2024-10-01 13:52:31.224727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.896 [2024-10-01 13:52:31.224759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.896 [2024-10-01 13:52:31.235396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.896 [2024-10-01 13:52:31.236142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.896 [2024-10-01 13:52:31.236185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.896 [2024-10-01 13:52:31.236218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.896 [2024-10-01 13:52:31.236308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.896 [2024-10-01 13:52:31.236347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.896 [2024-10-01 13:52:31.236366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.896 [2024-10-01 13:52:31.236384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.896 [2024-10-01 13:52:31.236416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.896 [2024-10-01 13:52:31.246908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.896 [2024-10-01 13:52:31.247072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.896 [2024-10-01 13:52:31.247111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.896 [2024-10-01 13:52:31.247133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.896 [2024-10-01 13:52:31.247169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.896 [2024-10-01 13:52:31.247249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.896 [2024-10-01 13:52:31.247278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.896 [2024-10-01 13:52:31.247294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.896 [2024-10-01 13:52:31.247332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.896 [2024-10-01 13:52:31.258551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.896 [2024-10-01 13:52:31.258900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.896 [2024-10-01 13:52:31.258956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.896 [2024-10-01 13:52:31.258978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.896 [2024-10-01 13:52:31.259024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.896 [2024-10-01 13:52:31.259061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.896 [2024-10-01 13:52:31.259079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.896 [2024-10-01 13:52:31.259096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.896 [2024-10-01 13:52:31.259129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.896 [2024-10-01 13:52:31.270034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.896 [2024-10-01 13:52:31.270202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.896 [2024-10-01 13:52:31.270249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.896 [2024-10-01 13:52:31.270269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.896 [2024-10-01 13:52:31.271246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.896 [2024-10-01 13:52:31.271487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.896 [2024-10-01 13:52:31.271521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.896 [2024-10-01 13:52:31.271541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.896 [2024-10-01 13:52:31.271620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.896 [2024-10-01 13:52:31.281286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.896 [2024-10-01 13:52:31.281450] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.896 [2024-10-01 13:52:31.281490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.896 [2024-10-01 13:52:31.281512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.896 [2024-10-01 13:52:31.281548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.896 [2024-10-01 13:52:31.281581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.896 [2024-10-01 13:52:31.281599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.896 [2024-10-01 13:52:31.281616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.896 [2024-10-01 13:52:31.281689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.896 [2024-10-01 13:52:31.292209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.896 [2024-10-01 13:52:31.292372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.896 [2024-10-01 13:52:31.292409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.896 [2024-10-01 13:52:31.292428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.896 [2024-10-01 13:52:31.292464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.896 [2024-10-01 13:52:31.292498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.896 [2024-10-01 13:52:31.292516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.896 [2024-10-01 13:52:31.292533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.896 [2024-10-01 13:52:31.292565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.896 [2024-10-01 13:52:31.302984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.896 [2024-10-01 13:52:31.303885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.896 [2024-10-01 13:52:31.303945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.896 [2024-10-01 13:52:31.303969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.896 [2024-10-01 13:52:31.304164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.896 [2024-10-01 13:52:31.304225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.897 [2024-10-01 13:52:31.304248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.897 [2024-10-01 13:52:31.304265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.897 [2024-10-01 13:52:31.304298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.897 [2024-10-01 13:52:31.314440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.897 [2024-10-01 13:52:31.314611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.897 [2024-10-01 13:52:31.314647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.897 [2024-10-01 13:52:31.314667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.897 [2024-10-01 13:52:31.314703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.897 [2024-10-01 13:52:31.314736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.897 [2024-10-01 13:52:31.314754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.897 [2024-10-01 13:52:31.314770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.897 [2024-10-01 13:52:31.315694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.897 [2024-10-01 13:52:31.325682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.897 [2024-10-01 13:52:31.325846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.897 [2024-10-01 13:52:31.325882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.897 [2024-10-01 13:52:31.325957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.897 [2024-10-01 13:52:31.325998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.897 [2024-10-01 13:52:31.326031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.897 [2024-10-01 13:52:31.326050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.897 [2024-10-01 13:52:31.326066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.897 [2024-10-01 13:52:31.326101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.897 [2024-10-01 13:52:31.336434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.897 [2024-10-01 13:52:31.336595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.897 [2024-10-01 13:52:31.336632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.897 [2024-10-01 13:52:31.336650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.897 [2024-10-01 13:52:31.336686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.897 [2024-10-01 13:52:31.336719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.897 [2024-10-01 13:52:31.336736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.897 [2024-10-01 13:52:31.336751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.897 [2024-10-01 13:52:31.336783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.897 [2024-10-01 13:52:31.347092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.897 [2024-10-01 13:52:31.347963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.897 [2024-10-01 13:52:31.348011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.897 [2024-10-01 13:52:31.348033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.897 [2024-10-01 13:52:31.348225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.897 [2024-10-01 13:52:31.348288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.897 [2024-10-01 13:52:31.348311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.897 [2024-10-01 13:52:31.348328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.897 [2024-10-01 13:52:31.348360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.897 [2024-10-01 13:52:31.358595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.897 [2024-10-01 13:52:31.358746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.897 [2024-10-01 13:52:31.358788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.897 [2024-10-01 13:52:31.358806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.897 [2024-10-01 13:52:31.358842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.897 [2024-10-01 13:52:31.358875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.897 [2024-10-01 13:52:31.358948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.897 [2024-10-01 13:52:31.358967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.897 [2024-10-01 13:52:31.359868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.897 [2024-10-01 13:52:31.369797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.897 [2024-10-01 13:52:31.369992] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.897 [2024-10-01 13:52:31.370028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.897 [2024-10-01 13:52:31.370047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.897 [2024-10-01 13:52:31.370085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.897 [2024-10-01 13:52:31.370117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.897 [2024-10-01 13:52:31.370135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.897 [2024-10-01 13:52:31.370151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.897 [2024-10-01 13:52:31.370183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.897 [2024-10-01 13:52:31.380283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.897 [2024-10-01 13:52:31.380442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.897 [2024-10-01 13:52:31.380477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.897 [2024-10-01 13:52:31.380497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.897 [2024-10-01 13:52:31.380533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.897 [2024-10-01 13:52:31.380565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.897 [2024-10-01 13:52:31.380582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.897 [2024-10-01 13:52:31.380599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.897 [2024-10-01 13:52:31.380631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.897 [2024-10-01 13:52:31.391833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.897 [2024-10-01 13:52:31.392054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.897 [2024-10-01 13:52:31.392091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.897 [2024-10-01 13:52:31.392111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.897 [2024-10-01 13:52:31.392148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.897 [2024-10-01 13:52:31.392181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.897 [2024-10-01 13:52:31.392200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.897 [2024-10-01 13:52:31.392216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.897 [2024-10-01 13:52:31.392248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.897 [2024-10-01 13:52:31.402499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.897 [2024-10-01 13:52:31.402692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.897 [2024-10-01 13:52:31.402732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.897 [2024-10-01 13:52:31.402752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.897 [2024-10-01 13:52:31.402791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.897 [2024-10-01 13:52:31.402824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.897 [2024-10-01 13:52:31.402842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.897 [2024-10-01 13:52:31.402858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.897 [2024-10-01 13:52:31.403777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.897 [2024-10-01 13:52:31.413688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.897 [2024-10-01 13:52:31.413864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.897 [2024-10-01 13:52:31.413901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.897 [2024-10-01 13:52:31.413947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.897 [2024-10-01 13:52:31.413985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.897 [2024-10-01 13:52:31.414019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.897 [2024-10-01 13:52:31.414037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.897 [2024-10-01 13:52:31.414054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.897 [2024-10-01 13:52:31.414087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.898 [2024-10-01 13:52:31.424091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.898 [2024-10-01 13:52:31.424247] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.898 [2024-10-01 13:52:31.424282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.898 [2024-10-01 13:52:31.424302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.898 [2024-10-01 13:52:31.424338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.898 [2024-10-01 13:52:31.424371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.898 [2024-10-01 13:52:31.424389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.898 [2024-10-01 13:52:31.424405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.898 [2024-10-01 13:52:31.424436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.898 [2024-10-01 13:52:31.434750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.898 [2024-10-01 13:52:31.435625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.898 [2024-10-01 13:52:31.435674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.898 [2024-10-01 13:52:31.435707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.898 [2024-10-01 13:52:31.435947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.898 [2024-10-01 13:52:31.435998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.898 [2024-10-01 13:52:31.436019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.898 [2024-10-01 13:52:31.436036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.898 [2024-10-01 13:52:31.436069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.898 [2024-10-01 13:52:31.446084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.898 [2024-10-01 13:52:31.446257] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.898 [2024-10-01 13:52:31.446292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.898 [2024-10-01 13:52:31.446311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.898 [2024-10-01 13:52:31.446347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.898 [2024-10-01 13:52:31.446380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.898 [2024-10-01 13:52:31.446397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.898 [2024-10-01 13:52:31.446413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.898 [2024-10-01 13:52:31.447340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.898 [2024-10-01 13:52:31.457217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.898 [2024-10-01 13:52:31.457373] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.898 [2024-10-01 13:52:31.457409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.898 [2024-10-01 13:52:31.457428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.898 [2024-10-01 13:52:31.457464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.898 [2024-10-01 13:52:31.457498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.898 [2024-10-01 13:52:31.457516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.898 [2024-10-01 13:52:31.457533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.898 [2024-10-01 13:52:31.457565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.898 [2024-10-01 13:52:31.467671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.898 [2024-10-01 13:52:31.467817] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.898 [2024-10-01 13:52:31.467852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.898 [2024-10-01 13:52:31.467872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.898 [2024-10-01 13:52:31.467907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.898 [2024-10-01 13:52:31.467959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.898 [2024-10-01 13:52:31.467978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.898 [2024-10-01 13:52:31.468029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.898 [2024-10-01 13:52:31.468064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.898 [2024-10-01 13:52:31.478259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.898 [2024-10-01 13:52:31.479138] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.898 [2024-10-01 13:52:31.479182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.898 [2024-10-01 13:52:31.479203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.898 [2024-10-01 13:52:31.479384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.898 [2024-10-01 13:52:31.479450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.898 [2024-10-01 13:52:31.479474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.898 [2024-10-01 13:52:31.479489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.898 [2024-10-01 13:52:31.479522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.898 [2024-10-01 13:52:31.489774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.898 [2024-10-01 13:52:31.489959] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.898 [2024-10-01 13:52:31.489995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.898 [2024-10-01 13:52:31.490014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.898 [2024-10-01 13:52:31.490050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.898 [2024-10-01 13:52:31.490082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.898 [2024-10-01 13:52:31.490100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.898 [2024-10-01 13:52:31.490117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.898 [2024-10-01 13:52:31.491079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.898 [2024-10-01 13:52:31.500410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.898 [2024-10-01 13:52:31.500572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.898 [2024-10-01 13:52:31.500608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.898 [2024-10-01 13:52:31.500627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.898 [2024-10-01 13:52:31.501548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.898 [2024-10-01 13:52:31.502210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.898 [2024-10-01 13:52:31.502249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.898 [2024-10-01 13:52:31.502279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.898 [2024-10-01 13:52:31.502380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.898 [2024-10-01 13:52:31.510513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.898 [2024-10-01 13:52:31.510724] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.898 [2024-10-01 13:52:31.510760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.898 [2024-10-01 13:52:31.510779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.898 [2024-10-01 13:52:31.511994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.898 [2024-10-01 13:52:31.512859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.898 [2024-10-01 13:52:31.512898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.898 [2024-10-01 13:52:31.512946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.898 [2024-10-01 13:52:31.513088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.898 [2024-10-01 13:52:31.520679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.898 [2024-10-01 13:52:31.520816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.898 [2024-10-01 13:52:31.520851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.898 [2024-10-01 13:52:31.520871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.898 [2024-10-01 13:52:31.521784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.898 [2024-10-01 13:52:31.522062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.898 [2024-10-01 13:52:31.522101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.898 [2024-10-01 13:52:31.522122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.898 [2024-10-01 13:52:31.522216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.898 [2024-10-01 13:52:31.531894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.898 [2024-10-01 13:52:31.532046] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.898 [2024-10-01 13:52:31.532080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.898 [2024-10-01 13:52:31.532100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.899 [2024-10-01 13:52:31.532134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.899 [2024-10-01 13:52:31.532179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.899 [2024-10-01 13:52:31.532199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.899 [2024-10-01 13:52:31.532215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.899 [2024-10-01 13:52:31.532247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.899 [2024-10-01 13:52:31.542333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.899 [2024-10-01 13:52:31.542472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.899 [2024-10-01 13:52:31.542507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.899 [2024-10-01 13:52:31.542526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.899 [2024-10-01 13:52:31.542585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.899 [2024-10-01 13:52:31.542654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.899 [2024-10-01 13:52:31.542674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.899 [2024-10-01 13:52:31.542690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.899 [2024-10-01 13:52:31.542722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.899 [2024-10-01 13:52:31.553329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.899 [2024-10-01 13:52:31.554194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.899 [2024-10-01 13:52:31.554243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.899 [2024-10-01 13:52:31.554264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.899 [2024-10-01 13:52:31.554442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.899 [2024-10-01 13:52:31.554491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.899 [2024-10-01 13:52:31.554510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.899 [2024-10-01 13:52:31.554526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.899 [2024-10-01 13:52:31.554572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.899 [2024-10-01 13:52:31.564789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.899 [2024-10-01 13:52:31.564957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.899 [2024-10-01 13:52:31.564992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.899 [2024-10-01 13:52:31.565011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.899 [2024-10-01 13:52:31.565048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.899 [2024-10-01 13:52:31.565081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.899 [2024-10-01 13:52:31.565100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.899 [2024-10-01 13:52:31.565116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.899 [2024-10-01 13:52:31.566024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.899 [2024-10-01 13:52:31.576104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.899 [2024-10-01 13:52:31.576268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.899 [2024-10-01 13:52:31.576303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.899 [2024-10-01 13:52:31.576322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.899 [2024-10-01 13:52:31.576358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.899 [2024-10-01 13:52:31.576391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.899 [2024-10-01 13:52:31.576409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.899 [2024-10-01 13:52:31.576425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.899 [2024-10-01 13:52:31.576496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.899 [2024-10-01 13:52:31.586484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.899 [2024-10-01 13:52:31.586650] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.899 [2024-10-01 13:52:31.586686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.899 [2024-10-01 13:52:31.586705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.899 [2024-10-01 13:52:31.586741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.899 [2024-10-01 13:52:31.586773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.899 [2024-10-01 13:52:31.586791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.899 [2024-10-01 13:52:31.586807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.899 [2024-10-01 13:52:31.586838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.899 [2024-10-01 13:52:31.597199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.899 [2024-10-01 13:52:31.598092] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.899 [2024-10-01 13:52:31.598144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.899 [2024-10-01 13:52:31.598167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.899 [2024-10-01 13:52:31.598345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.899 [2024-10-01 13:52:31.598411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.899 [2024-10-01 13:52:31.598435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.899 [2024-10-01 13:52:31.598451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.899 [2024-10-01 13:52:31.598484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.899 [2024-10-01 13:52:31.608660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.899 [2024-10-01 13:52:31.608811] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.899 [2024-10-01 13:52:31.608847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.899 [2024-10-01 13:52:31.608865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.899 [2024-10-01 13:52:31.608899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.899 [2024-10-01 13:52:31.608950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.899 [2024-10-01 13:52:31.608970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.899 [2024-10-01 13:52:31.608986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.899 [2024-10-01 13:52:31.609879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.899 [2024-10-01 13:52:31.619882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.899 [2024-10-01 13:52:31.620059] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.899 [2024-10-01 13:52:31.620094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.899 [2024-10-01 13:52:31.620154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.899 [2024-10-01 13:52:31.620192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.899 [2024-10-01 13:52:31.620226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.899 [2024-10-01 13:52:31.620243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.899 [2024-10-01 13:52:31.620260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.899 [2024-10-01 13:52:31.620291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.899 [2024-10-01 13:52:31.630291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.899 [2024-10-01 13:52:31.630472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.899 [2024-10-01 13:52:31.630508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.899 [2024-10-01 13:52:31.630527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.899 [2024-10-01 13:52:31.630579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.899 [2024-10-01 13:52:31.630614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.899 [2024-10-01 13:52:31.630632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.899 [2024-10-01 13:52:31.630648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.900 [2024-10-01 13:52:31.630680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.900 [2024-10-01 13:52:31.640985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.900 [2024-10-01 13:52:31.641869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.900 [2024-10-01 13:52:31.641932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.900 [2024-10-01 13:52:31.641957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.900 [2024-10-01 13:52:31.642143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.900 [2024-10-01 13:52:31.642201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.900 [2024-10-01 13:52:31.642221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.900 [2024-10-01 13:52:31.642238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.900 [2024-10-01 13:52:31.642272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.900 [2024-10-01 13:52:31.652464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.900 [2024-10-01 13:52:31.652644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.900 [2024-10-01 13:52:31.652680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.900 [2024-10-01 13:52:31.652700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.900 [2024-10-01 13:52:31.652737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.900 [2024-10-01 13:52:31.653672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.900 [2024-10-01 13:52:31.653753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.900 [2024-10-01 13:52:31.653775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.900 [2024-10-01 13:52:31.654012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.900 [2024-10-01 13:52:31.663696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.900 [2024-10-01 13:52:31.663856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.900 [2024-10-01 13:52:31.663892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.900 [2024-10-01 13:52:31.663926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.900 [2024-10-01 13:52:31.663973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.900 [2024-10-01 13:52:31.664006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.900 [2024-10-01 13:52:31.664024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.900 [2024-10-01 13:52:31.664041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.900 [2024-10-01 13:52:31.664072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.900 [2024-10-01 13:52:31.674131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.900 [2024-10-01 13:52:31.674294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.900 [2024-10-01 13:52:31.674330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.900 [2024-10-01 13:52:31.674349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.900 [2024-10-01 13:52:31.674384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.900 [2024-10-01 13:52:31.674417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.900 [2024-10-01 13:52:31.674436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.900 [2024-10-01 13:52:31.674453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.900 [2024-10-01 13:52:31.674485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.900 [2024-10-01 13:52:31.684779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.900 [2024-10-01 13:52:31.685659] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.900 [2024-10-01 13:52:31.685708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.900 [2024-10-01 13:52:31.685731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.900 [2024-10-01 13:52:31.685950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.900 [2024-10-01 13:52:31.686018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.900 [2024-10-01 13:52:31.686042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.900 [2024-10-01 13:52:31.686059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.900 [2024-10-01 13:52:31.686093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.900 [2024-10-01 13:52:31.696297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.900 [2024-10-01 13:52:31.696471] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.900 [2024-10-01 13:52:31.696508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.900 [2024-10-01 13:52:31.696527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.900 [2024-10-01 13:52:31.696563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.900 [2024-10-01 13:52:31.696597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.900 [2024-10-01 13:52:31.696615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.900 [2024-10-01 13:52:31.696631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.900 [2024-10-01 13:52:31.697558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.900 [2024-10-01 13:52:31.707738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.900 [2024-10-01 13:52:31.707909] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.900 [2024-10-01 13:52:31.707959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.900 [2024-10-01 13:52:31.707979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.900 [2024-10-01 13:52:31.708016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.900 [2024-10-01 13:52:31.708049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.900 [2024-10-01 13:52:31.708067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.900 [2024-10-01 13:52:31.708084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.900 [2024-10-01 13:52:31.708116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.900 [2024-10-01 13:52:31.718292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.900 [2024-10-01 13:52:31.718461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.900 [2024-10-01 13:52:31.718497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.900 [2024-10-01 13:52:31.718515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.900 [2024-10-01 13:52:31.718564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.900 [2024-10-01 13:52:31.718600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.900 [2024-10-01 13:52:31.718619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.900 [2024-10-01 13:52:31.718637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.900 [2024-10-01 13:52:31.718668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.900 [2024-10-01 13:52:31.729056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.900 [2024-10-01 13:52:31.729939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.900 [2024-10-01 13:52:31.729989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.900 [2024-10-01 13:52:31.730051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.900 [2024-10-01 13:52:31.730235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.900 [2024-10-01 13:52:31.730285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.900 [2024-10-01 13:52:31.730306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.900 [2024-10-01 13:52:31.730322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.900 [2024-10-01 13:52:31.730354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.900 [2024-10-01 13:52:31.740504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.900 [2024-10-01 13:52:31.740660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.900 [2024-10-01 13:52:31.740695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.900 [2024-10-01 13:52:31.740714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.900 [2024-10-01 13:52:31.740750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.900 [2024-10-01 13:52:31.740783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.900 [2024-10-01 13:52:31.740801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.900 [2024-10-01 13:52:31.740817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.900 [2024-10-01 13:52:31.741737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.900 [2024-10-01 13:52:31.751704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.901 [2024-10-01 13:52:31.751867] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.901 [2024-10-01 13:52:31.751903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.901 [2024-10-01 13:52:31.751940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.901 [2024-10-01 13:52:31.751979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.901 [2024-10-01 13:52:31.752013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.901 [2024-10-01 13:52:31.752032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.901 [2024-10-01 13:52:31.752049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.901 [2024-10-01 13:52:31.752081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.901 [2024-10-01 13:52:31.762230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.901 [2024-10-01 13:52:31.762401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.901 [2024-10-01 13:52:31.762437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.901 [2024-10-01 13:52:31.762457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.901 [2024-10-01 13:52:31.762493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.901 [2024-10-01 13:52:31.762526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.901 [2024-10-01 13:52:31.762573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.901 [2024-10-01 13:52:31.762633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.901 [2024-10-01 13:52:31.762669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.901 [2024-10-01 13:52:31.773013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.901 [2024-10-01 13:52:31.773927] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.901 [2024-10-01 13:52:31.773975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.901 [2024-10-01 13:52:31.773997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.901 [2024-10-01 13:52:31.774192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.901 [2024-10-01 13:52:31.774253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.901 [2024-10-01 13:52:31.774273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.901 [2024-10-01 13:52:31.774288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.901 [2024-10-01 13:52:31.774322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.901 [2024-10-01 13:52:31.784474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.901 [2024-10-01 13:52:31.784636] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.901 [2024-10-01 13:52:31.784671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.901 [2024-10-01 13:52:31.784691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.901 [2024-10-01 13:52:31.784727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.901 [2024-10-01 13:52:31.784761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.901 [2024-10-01 13:52:31.784779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.901 [2024-10-01 13:52:31.784795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.901 [2024-10-01 13:52:31.785731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.901 [2024-10-01 13:52:31.795754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.901 [2024-10-01 13:52:31.795935] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.901 [2024-10-01 13:52:31.795971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.901 [2024-10-01 13:52:31.795990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.901 [2024-10-01 13:52:31.796026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.901 [2024-10-01 13:52:31.796060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.901 [2024-10-01 13:52:31.796077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.901 [2024-10-01 13:52:31.796094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.901 [2024-10-01 13:52:31.796126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.901 [2024-10-01 13:52:31.806219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.901 [2024-10-01 13:52:31.806458] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.901 [2024-10-01 13:52:31.806495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.901 [2024-10-01 13:52:31.806515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.901 [2024-10-01 13:52:31.806566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.901 [2024-10-01 13:52:31.806602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.901 [2024-10-01 13:52:31.806621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.901 [2024-10-01 13:52:31.806637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.901 [2024-10-01 13:52:31.806670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.901 [2024-10-01 13:52:31.816830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.901 [2024-10-01 13:52:31.817710] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.901 [2024-10-01 13:52:31.817759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.901 [2024-10-01 13:52:31.817781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.901 [2024-10-01 13:52:31.817998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.901 [2024-10-01 13:52:31.818051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.901 [2024-10-01 13:52:31.818071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.901 [2024-10-01 13:52:31.818087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.901 [2024-10-01 13:52:31.818121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.901 [2024-10-01 13:52:31.828266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.901 [2024-10-01 13:52:31.828431] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.901 [2024-10-01 13:52:31.828467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.901 [2024-10-01 13:52:31.828486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.901 [2024-10-01 13:52:31.828521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.901 [2024-10-01 13:52:31.829452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.901 [2024-10-01 13:52:31.829492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.901 [2024-10-01 13:52:31.829514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.901 [2024-10-01 13:52:31.829723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.901 [2024-10-01 13:52:31.839392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.901 [2024-10-01 13:52:31.839556] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.901 [2024-10-01 13:52:31.839592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.901 [2024-10-01 13:52:31.839611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.901 [2024-10-01 13:52:31.839690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.901 [2024-10-01 13:52:31.839725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.901 [2024-10-01 13:52:31.839743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.901 [2024-10-01 13:52:31.839759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.901 [2024-10-01 13:52:31.839803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.901 [2024-10-01 13:52:31.849785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.901 [2024-10-01 13:52:31.849956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.901 [2024-10-01 13:52:31.849992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.901 [2024-10-01 13:52:31.850011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.901 [2024-10-01 13:52:31.850048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.901 [2024-10-01 13:52:31.850080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.901 [2024-10-01 13:52:31.850098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.901 [2024-10-01 13:52:31.850114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.901 [2024-10-01 13:52:31.850146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.901 [2024-10-01 13:52:31.860519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.901 [2024-10-01 13:52:31.861403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.901 [2024-10-01 13:52:31.861451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.901 [2024-10-01 13:52:31.861473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.902 [2024-10-01 13:52:31.861667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.902 [2024-10-01 13:52:31.861717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.902 [2024-10-01 13:52:31.861737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.902 [2024-10-01 13:52:31.861754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.902 [2024-10-01 13:52:31.861787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.902 7961.00 IOPS, 31.10 MiB/s [2024-10-01 13:52:31.871923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.902 [2024-10-01 13:52:31.872080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.902 [2024-10-01 13:52:31.872115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.902 [2024-10-01 13:52:31.872134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.902 [2024-10-01 13:52:31.872170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.902 [2024-10-01 13:52:31.873108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.902 [2024-10-01 13:52:31.873148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.902 [2024-10-01 13:52:31.873203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.902 [2024-10-01 13:52:31.873413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.902 [2024-10-01 13:52:31.883084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.902 [2024-10-01 13:52:31.883236] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.902 [2024-10-01 13:52:31.883271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.902 [2024-10-01 13:52:31.883290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.902 [2024-10-01 13:52:31.883325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.902 [2024-10-01 13:52:31.883357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.902 [2024-10-01 13:52:31.883374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.902 [2024-10-01 13:52:31.883390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.902 [2024-10-01 13:52:31.883422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.902 [2024-10-01 13:52:31.893732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.902 [2024-10-01 13:52:31.893898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.902 [2024-10-01 13:52:31.893948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.902 [2024-10-01 13:52:31.893969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.902 [2024-10-01 13:52:31.894006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.902 [2024-10-01 13:52:31.894039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.902 [2024-10-01 13:52:31.894057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.902 [2024-10-01 13:52:31.894074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.902 [2024-10-01 13:52:31.894106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.902 [2024-10-01 13:52:31.904501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.902 [2024-10-01 13:52:31.904656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.902 [2024-10-01 13:52:31.904691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.902 [2024-10-01 13:52:31.904710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.902 [2024-10-01 13:52:31.905474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.902 [2024-10-01 13:52:31.905708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.902 [2024-10-01 13:52:31.905747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.902 [2024-10-01 13:52:31.905767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.902 [2024-10-01 13:52:31.905810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.902 [2024-10-01 13:52:31.914611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.902 [2024-10-01 13:52:31.914759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.902 [2024-10-01 13:52:31.914844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.902 [2024-10-01 13:52:31.914867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.902 [2024-10-01 13:52:31.914904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.902 [2024-10-01 13:52:31.914956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.902 [2024-10-01 13:52:31.914981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.902 [2024-10-01 13:52:31.914996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.902 [2024-10-01 13:52:31.916239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.902 [2024-10-01 13:52:31.924714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.902 [2024-10-01 13:52:31.924869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.902 [2024-10-01 13:52:31.924903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.902 [2024-10-01 13:52:31.924936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.902 [2024-10-01 13:52:31.924974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.902 [2024-10-01 13:52:31.925007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.902 [2024-10-01 13:52:31.925024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.902 [2024-10-01 13:52:31.925040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.902 [2024-10-01 13:52:31.925071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.902 [2024-10-01 13:52:31.935599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.902 [2024-10-01 13:52:31.935752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.902 [2024-10-01 13:52:31.935787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.902 [2024-10-01 13:52:31.935805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.902 [2024-10-01 13:52:31.935841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.902 [2024-10-01 13:52:31.935874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.902 [2024-10-01 13:52:31.935893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.902 [2024-10-01 13:52:31.935937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.902 [2024-10-01 13:52:31.935972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.902 [2024-10-01 13:52:31.946079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.902 [2024-10-01 13:52:31.946225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.902 [2024-10-01 13:52:31.946260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.902 [2024-10-01 13:52:31.946279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.902 [2024-10-01 13:52:31.946315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.902 [2024-10-01 13:52:31.947286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.902 [2024-10-01 13:52:31.947326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.902 [2024-10-01 13:52:31.947347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.902 [2024-10-01 13:52:31.947543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.902 [2024-10-01 13:52:31.957210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.902 [2024-10-01 13:52:31.957352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.902 [2024-10-01 13:52:31.957387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.902 [2024-10-01 13:52:31.957405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.902 [2024-10-01 13:52:31.957440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.902 [2024-10-01 13:52:31.957473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.902 [2024-10-01 13:52:31.957490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.902 [2024-10-01 13:52:31.957505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.902 [2024-10-01 13:52:31.957537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.902 [2024-10-01 13:52:31.967567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.902 [2024-10-01 13:52:31.967708] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.902 [2024-10-01 13:52:31.967743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.902 [2024-10-01 13:52:31.967762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.902 [2024-10-01 13:52:31.967797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.902 [2024-10-01 13:52:31.967830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.903 [2024-10-01 13:52:31.967848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.903 [2024-10-01 13:52:31.967863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.903 [2024-10-01 13:52:31.967895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.903 [2024-10-01 13:52:31.978297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.903 [2024-10-01 13:52:31.979217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.903 [2024-10-01 13:52:31.979267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.903 [2024-10-01 13:52:31.979290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.903 [2024-10-01 13:52:31.979475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.903 [2024-10-01 13:52:31.979525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.903 [2024-10-01 13:52:31.979546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.903 [2024-10-01 13:52:31.979562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.903 [2024-10-01 13:52:31.979596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.903 [2024-10-01 13:52:31.989800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.903 [2024-10-01 13:52:31.989998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.903 [2024-10-01 13:52:31.990034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.903 [2024-10-01 13:52:31.990060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.903 [2024-10-01 13:52:31.990097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.903 [2024-10-01 13:52:31.991034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.903 [2024-10-01 13:52:31.991072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.903 [2024-10-01 13:52:31.991093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.903 [2024-10-01 13:52:31.991288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.903 [2024-10-01 13:52:32.000964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.903 [2024-10-01 13:52:32.001109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.903 [2024-10-01 13:52:32.001144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.903 [2024-10-01 13:52:32.001163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.903 [2024-10-01 13:52:32.001199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.903 [2024-10-01 13:52:32.001232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.903 [2024-10-01 13:52:32.001249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.903 [2024-10-01 13:52:32.001266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.903 [2024-10-01 13:52:32.001298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.903 [2024-10-01 13:52:32.011390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.903 [2024-10-01 13:52:32.011545] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.903 [2024-10-01 13:52:32.011581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.903 [2024-10-01 13:52:32.011600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.903 [2024-10-01 13:52:32.011636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.903 [2024-10-01 13:52:32.011669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.903 [2024-10-01 13:52:32.011687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.903 [2024-10-01 13:52:32.011703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.903 [2024-10-01 13:52:32.011734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.903 [2024-10-01 13:52:32.022062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.903 [2024-10-01 13:52:32.022953] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.903 [2024-10-01 13:52:32.023007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.903 [2024-10-01 13:52:32.023073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.903 [2024-10-01 13:52:32.023276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.903 [2024-10-01 13:52:32.023328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.903 [2024-10-01 13:52:32.023348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.903 [2024-10-01 13:52:32.023364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.903 [2024-10-01 13:52:32.023397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.903 [2024-10-01 13:52:32.033452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.903 [2024-10-01 13:52:32.033594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.903 [2024-10-01 13:52:32.033628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.903 [2024-10-01 13:52:32.033646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.903 [2024-10-01 13:52:32.033681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.903 [2024-10-01 13:52:32.033713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.903 [2024-10-01 13:52:32.033730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.903 [2024-10-01 13:52:32.033747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.903 [2024-10-01 13:52:32.034677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.903 [2024-10-01 13:52:32.044552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.903 [2024-10-01 13:52:32.044707] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.903 [2024-10-01 13:52:32.044743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.903 [2024-10-01 13:52:32.044763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.903 [2024-10-01 13:52:32.044798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.903 [2024-10-01 13:52:32.044831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.903 [2024-10-01 13:52:32.044850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.903 [2024-10-01 13:52:32.044866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.903 [2024-10-01 13:52:32.044897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.903 [2024-10-01 13:52:32.054947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.903 [2024-10-01 13:52:32.055102] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.903 [2024-10-01 13:52:32.055138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.903 [2024-10-01 13:52:32.055158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.903 [2024-10-01 13:52:32.055194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.903 [2024-10-01 13:52:32.055227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.903 [2024-10-01 13:52:32.055286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.903 [2024-10-01 13:52:32.055304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.903 [2024-10-01 13:52:32.055337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.903 [2024-10-01 13:52:32.066310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.903 [2024-10-01 13:52:32.066641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.903 [2024-10-01 13:52:32.066694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.903 [2024-10-01 13:52:32.066716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.903 [2024-10-01 13:52:32.066762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.903 [2024-10-01 13:52:32.066807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.903 [2024-10-01 13:52:32.066825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.903 [2024-10-01 13:52:32.066841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.903 [2024-10-01 13:52:32.066873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.903 [2024-10-01 13:52:32.076960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.903 [2024-10-01 13:52:32.077123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.903 [2024-10-01 13:52:32.077157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.903 [2024-10-01 13:52:32.077176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.903 [2024-10-01 13:52:32.077211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.903 [2024-10-01 13:52:32.077245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.903 [2024-10-01 13:52:32.077263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.903 [2024-10-01 13:52:32.077279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.903 [2024-10-01 13:52:32.078190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.903 [2024-10-01 13:52:32.088043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.904 [2024-10-01 13:52:32.088193] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.904 [2024-10-01 13:52:32.088229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.904 [2024-10-01 13:52:32.088248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.904 [2024-10-01 13:52:32.088284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.904 [2024-10-01 13:52:32.088317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.904 [2024-10-01 13:52:32.088335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.904 [2024-10-01 13:52:32.088352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.904 [2024-10-01 13:52:32.088384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.904 [2024-10-01 13:52:32.098452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.904 [2024-10-01 13:52:32.098672] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.904 [2024-10-01 13:52:32.098708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.904 [2024-10-01 13:52:32.098728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.904 [2024-10-01 13:52:32.098764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.904 [2024-10-01 13:52:32.098797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.904 [2024-10-01 13:52:32.098826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.904 [2024-10-01 13:52:32.098842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.904 [2024-10-01 13:52:32.098874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.904 [2024-10-01 13:52:32.109041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.904 [2024-10-01 13:52:32.109895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.904 [2024-10-01 13:52:32.109955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.904 [2024-10-01 13:52:32.109978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.904 [2024-10-01 13:52:32.110163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.904 [2024-10-01 13:52:32.110236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.904 [2024-10-01 13:52:32.110262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.904 [2024-10-01 13:52:32.110278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.904 [2024-10-01 13:52:32.110315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.904 [2024-10-01 13:52:32.120418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.904 [2024-10-01 13:52:32.120588] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.904 [2024-10-01 13:52:32.120623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.904 [2024-10-01 13:52:32.120642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.904 [2024-10-01 13:52:32.120679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.904 [2024-10-01 13:52:32.121615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.904 [2024-10-01 13:52:32.121646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.904 [2024-10-01 13:52:32.121664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.904 [2024-10-01 13:52:32.121863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.904 [2024-10-01 13:52:32.131490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.904 [2024-10-01 13:52:32.132252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.904 [2024-10-01 13:52:32.132301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.904 [2024-10-01 13:52:32.132324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.904 [2024-10-01 13:52:32.132475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.904 [2024-10-01 13:52:32.132518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.904 [2024-10-01 13:52:32.132538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.904 [2024-10-01 13:52:32.132554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.904 [2024-10-01 13:52:32.132588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.904 [2024-10-01 13:52:32.143049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.904 [2024-10-01 13:52:32.143206] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.904 [2024-10-01 13:52:32.143241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.904 [2024-10-01 13:52:32.143260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.904 [2024-10-01 13:52:32.143306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.904 [2024-10-01 13:52:32.143339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.904 [2024-10-01 13:52:32.143356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.904 [2024-10-01 13:52:32.143373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.904 [2024-10-01 13:52:32.143405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.904 [2024-10-01 13:52:32.153817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.904 [2024-10-01 13:52:32.154720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.904 [2024-10-01 13:52:32.154768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.904 [2024-10-01 13:52:32.154790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.904 [2024-10-01 13:52:32.155002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.904 [2024-10-01 13:52:32.155053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.904 [2024-10-01 13:52:32.155074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.904 [2024-10-01 13:52:32.155091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.904 [2024-10-01 13:52:32.155125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.904 [2024-10-01 13:52:32.165310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.904 [2024-10-01 13:52:32.165474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.904 [2024-10-01 13:52:32.165509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.904 [2024-10-01 13:52:32.165528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.904 [2024-10-01 13:52:32.165564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.904 [2024-10-01 13:52:32.166478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.904 [2024-10-01 13:52:32.166517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.904 [2024-10-01 13:52:32.166581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.904 [2024-10-01 13:52:32.166782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.904 [2024-10-01 13:52:32.176369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.904 [2024-10-01 13:52:32.176527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.904 [2024-10-01 13:52:32.176562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.904 [2024-10-01 13:52:32.176581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.904 [2024-10-01 13:52:32.176616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.904 [2024-10-01 13:52:32.176648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.904 [2024-10-01 13:52:32.176666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.904 [2024-10-01 13:52:32.176682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.904 [2024-10-01 13:52:32.176714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.904 [2024-10-01 13:52:32.186839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.904 [2024-10-01 13:52:32.187003] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.904 [2024-10-01 13:52:32.187039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.904 [2024-10-01 13:52:32.187057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.905 [2024-10-01 13:52:32.187093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.905 [2024-10-01 13:52:32.187126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.905 [2024-10-01 13:52:32.187144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.905 [2024-10-01 13:52:32.187160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.905 [2024-10-01 13:52:32.187191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.905 [2024-10-01 13:52:32.197394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.905 [2024-10-01 13:52:32.198269] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.905 [2024-10-01 13:52:32.198317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.905 [2024-10-01 13:52:32.198340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.905 [2024-10-01 13:52:32.198526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.905 [2024-10-01 13:52:32.198608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.905 [2024-10-01 13:52:32.198633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.905 [2024-10-01 13:52:32.198649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.905 [2024-10-01 13:52:32.198683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.905 [2024-10-01 13:52:32.208701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.905 [2024-10-01 13:52:32.208844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.905 [2024-10-01 13:52:32.208933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.905 [2024-10-01 13:52:32.208956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.905 [2024-10-01 13:52:32.208994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.905 [2024-10-01 13:52:32.209896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.905 [2024-10-01 13:52:32.209948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.905 [2024-10-01 13:52:32.209970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.905 [2024-10-01 13:52:32.210162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.905 [2024-10-01 13:52:32.219801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.905 [2024-10-01 13:52:32.219963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.905 [2024-10-01 13:52:32.220007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.905 [2024-10-01 13:52:32.220026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.905 [2024-10-01 13:52:32.220061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.905 [2024-10-01 13:52:32.220094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.905 [2024-10-01 13:52:32.220112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.905 [2024-10-01 13:52:32.220128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.905 [2024-10-01 13:52:32.220160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.905 [2024-10-01 13:52:32.230189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.905 [2024-10-01 13:52:32.230349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.905 [2024-10-01 13:52:32.230385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.905 [2024-10-01 13:52:32.230404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.905 [2024-10-01 13:52:32.230440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.905 [2024-10-01 13:52:32.230471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.905 [2024-10-01 13:52:32.230489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.905 [2024-10-01 13:52:32.230506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.905 [2024-10-01 13:52:32.230550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.905 [2024-10-01 13:52:32.240741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.905 [2024-10-01 13:52:32.241602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.905 [2024-10-01 13:52:32.241650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.905 [2024-10-01 13:52:32.241672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.905 [2024-10-01 13:52:32.241854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.905 [2024-10-01 13:52:32.241991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.905 [2024-10-01 13:52:32.242015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.905 [2024-10-01 13:52:32.242032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.905 [2024-10-01 13:52:32.242066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.905 [2024-10-01 13:52:32.252160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.905 [2024-10-01 13:52:32.252303] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.905 [2024-10-01 13:52:32.252337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.905 [2024-10-01 13:52:32.252356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.905 [2024-10-01 13:52:32.252391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.905 [2024-10-01 13:52:32.252422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.905 [2024-10-01 13:52:32.252440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.905 [2024-10-01 13:52:32.252456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.905 [2024-10-01 13:52:32.253367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.905 [2024-10-01 13:52:32.263227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.905 [2024-10-01 13:52:32.263378] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.905 [2024-10-01 13:52:32.263414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.905 [2024-10-01 13:52:32.263433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.905 [2024-10-01 13:52:32.263468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.905 [2024-10-01 13:52:32.263500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.905 [2024-10-01 13:52:32.263519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.905 [2024-10-01 13:52:32.263536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.905 [2024-10-01 13:52:32.263567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.905 [2024-10-01 13:52:32.273629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.905 [2024-10-01 13:52:32.273772] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.905 [2024-10-01 13:52:32.273819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.905 [2024-10-01 13:52:32.273838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.905 [2024-10-01 13:52:32.273874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.905 [2024-10-01 13:52:32.273906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.905 [2024-10-01 13:52:32.273940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.905 [2024-10-01 13:52:32.273957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.905 [2024-10-01 13:52:32.274025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.905 [2024-10-01 13:52:32.284336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.905 [2024-10-01 13:52:32.285215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.905 [2024-10-01 13:52:32.285264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.905 [2024-10-01 13:52:32.285287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.905 [2024-10-01 13:52:32.285486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.905 [2024-10-01 13:52:32.285547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.905 [2024-10-01 13:52:32.285570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.905 [2024-10-01 13:52:32.285587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.905 [2024-10-01 13:52:32.285620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.905 [2024-10-01 13:52:32.295802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.905 [2024-10-01 13:52:32.295986] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.905 [2024-10-01 13:52:32.296022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.905 [2024-10-01 13:52:32.296042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.905 [2024-10-01 13:52:32.296078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.905 [2024-10-01 13:52:32.297012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.906 [2024-10-01 13:52:32.297050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.906 [2024-10-01 13:52:32.297071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.906 [2024-10-01 13:52:32.297270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.906 [2024-10-01 13:52:32.307054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.906 [2024-10-01 13:52:32.307208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.906 [2024-10-01 13:52:32.307242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.906 [2024-10-01 13:52:32.307261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.906 [2024-10-01 13:52:32.307297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.906 [2024-10-01 13:52:32.307331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.906 [2024-10-01 13:52:32.307349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.906 [2024-10-01 13:52:32.307366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.906 [2024-10-01 13:52:32.307398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.906 [2024-10-01 13:52:32.318130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.906 [2024-10-01 13:52:32.318287] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.906 [2024-10-01 13:52:32.318327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.906 [2024-10-01 13:52:32.318385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.906 [2024-10-01 13:52:32.318424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.906 [2024-10-01 13:52:32.318459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.906 [2024-10-01 13:52:32.318477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.906 [2024-10-01 13:52:32.318492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.906 [2024-10-01 13:52:32.318525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.906 [2024-10-01 13:52:32.328901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.906 [2024-10-01 13:52:32.329784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.906 [2024-10-01 13:52:32.329829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.906 [2024-10-01 13:52:32.329851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.906 [2024-10-01 13:52:32.330065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.906 [2024-10-01 13:52:32.330123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.906 [2024-10-01 13:52:32.330145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.906 [2024-10-01 13:52:32.330161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.906 [2024-10-01 13:52:32.330195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.906 [2024-10-01 13:52:32.339025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.906 [2024-10-01 13:52:32.339168] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.906 [2024-10-01 13:52:32.339208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.906 [2024-10-01 13:52:32.339229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.906 [2024-10-01 13:52:32.339264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.906 [2024-10-01 13:52:32.339298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.906 [2024-10-01 13:52:32.339316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.906 [2024-10-01 13:52:32.339332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.906 [2024-10-01 13:52:32.339364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.906 [2024-10-01 13:52:32.349132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.906 [2024-10-01 13:52:32.349279] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.906 [2024-10-01 13:52:32.349320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.906 [2024-10-01 13:52:32.349341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.906 [2024-10-01 13:52:32.349376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.906 [2024-10-01 13:52:32.349408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.906 [2024-10-01 13:52:32.349474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.906 [2024-10-01 13:52:32.349494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.906 [2024-10-01 13:52:32.349528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.906 [2024-10-01 13:52:32.360070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.906 [2024-10-01 13:52:32.360226] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.906 [2024-10-01 13:52:32.360261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.906 [2024-10-01 13:52:32.360281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.906 [2024-10-01 13:52:32.360317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.906 [2024-10-01 13:52:32.360349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.906 [2024-10-01 13:52:32.360367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.906 [2024-10-01 13:52:32.360384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.906 [2024-10-01 13:52:32.360415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.906 [2024-10-01 13:52:32.370612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.906 [2024-10-01 13:52:32.370763] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.906 [2024-10-01 13:52:32.370804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.906 [2024-10-01 13:52:32.370825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.906 [2024-10-01 13:52:32.370862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.906 [2024-10-01 13:52:32.370895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.906 [2024-10-01 13:52:32.370938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.906 [2024-10-01 13:52:32.370958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.906 [2024-10-01 13:52:32.371853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.906 [2024-10-01 13:52:32.382226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.906 [2024-10-01 13:52:32.382381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.906 [2024-10-01 13:52:32.382416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.906 [2024-10-01 13:52:32.382435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.906 [2024-10-01 13:52:32.382471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.906 [2024-10-01 13:52:32.382505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.906 [2024-10-01 13:52:32.382524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.906 [2024-10-01 13:52:32.382567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.906 [2024-10-01 13:52:32.382603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.906 [2024-10-01 13:52:32.393648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.906 [2024-10-01 13:52:32.393840] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.906 [2024-10-01 13:52:32.393876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.906 [2024-10-01 13:52:32.393896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.906 [2024-10-01 13:52:32.393946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.906 [2024-10-01 13:52:32.393998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.906 [2024-10-01 13:52:32.394021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.906 [2024-10-01 13:52:32.394037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.906 [2024-10-01 13:52:32.394070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.906 [2024-10-01 13:52:32.405434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.906 [2024-10-01 13:52:32.405765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.906 [2024-10-01 13:52:32.405809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.906 [2024-10-01 13:52:32.405831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.906 [2024-10-01 13:52:32.405877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.906 [2024-10-01 13:52:32.405928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.906 [2024-10-01 13:52:32.405960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.906 [2024-10-01 13:52:32.405978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.906 [2024-10-01 13:52:32.406012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.906 [2024-10-01 13:52:32.416665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.907 [2024-10-01 13:52:32.416849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.907 [2024-10-01 13:52:32.416884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.907 [2024-10-01 13:52:32.416904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.907 [2024-10-01 13:52:32.417864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.907 [2024-10-01 13:52:32.418112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.907 [2024-10-01 13:52:32.418150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.907 [2024-10-01 13:52:32.418171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.907 [2024-10-01 13:52:32.418253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.907 [2024-10-01 13:52:32.427844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.907 [2024-10-01 13:52:32.428037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.907 [2024-10-01 13:52:32.428074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.907 [2024-10-01 13:52:32.428094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.907 [2024-10-01 13:52:32.428172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.907 [2024-10-01 13:52:32.428206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.907 [2024-10-01 13:52:32.428224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.907 [2024-10-01 13:52:32.428251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.907 [2024-10-01 13:52:32.428283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.907 [2024-10-01 13:52:32.438384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.907 [2024-10-01 13:52:32.438811] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.907 [2024-10-01 13:52:32.438861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.907 [2024-10-01 13:52:32.438884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.907 [2024-10-01 13:52:32.438978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.907 [2024-10-01 13:52:32.439032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.907 [2024-10-01 13:52:32.439053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.907 [2024-10-01 13:52:32.439079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.907 [2024-10-01 13:52:32.439113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.907 [2024-10-01 13:52:32.449696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.907 [2024-10-01 13:52:32.450756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.907 [2024-10-01 13:52:32.450812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.907 [2024-10-01 13:52:32.450835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.907 [2024-10-01 13:52:32.451105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.907 [2024-10-01 13:52:32.451161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.907 [2024-10-01 13:52:32.451184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.907 [2024-10-01 13:52:32.451202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.907 [2024-10-01 13:52:32.451236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.907 [2024-10-01 13:52:32.459822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.907 [2024-10-01 13:52:32.459998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.907 [2024-10-01 13:52:32.460033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.907 [2024-10-01 13:52:32.460053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.907 [2024-10-01 13:52:32.461301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.907 [2024-10-01 13:52:32.461575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.907 [2024-10-01 13:52:32.461615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.907 [2024-10-01 13:52:32.461686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.907 [2024-10-01 13:52:32.462633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.907 [2024-10-01 13:52:32.469945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.907 [2024-10-01 13:52:32.470112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.907 [2024-10-01 13:52:32.470147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.907 [2024-10-01 13:52:32.470166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.907 [2024-10-01 13:52:32.470202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.907 [2024-10-01 13:52:32.470245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.907 [2024-10-01 13:52:32.470263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.907 [2024-10-01 13:52:32.470280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.907 [2024-10-01 13:52:32.470312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.907 [2024-10-01 13:52:32.480753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.907 [2024-10-01 13:52:32.480945] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.907 [2024-10-01 13:52:32.480982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.907 [2024-10-01 13:52:32.481002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.907 [2024-10-01 13:52:32.481039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.907 [2024-10-01 13:52:32.481072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.907 [2024-10-01 13:52:32.481090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.907 [2024-10-01 13:52:32.481107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.907 [2024-10-01 13:52:32.481138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.907 [2024-10-01 13:52:32.491282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.907 [2024-10-01 13:52:32.491467] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.907 [2024-10-01 13:52:32.491502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.907 [2024-10-01 13:52:32.491522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.907 [2024-10-01 13:52:32.491559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.907 [2024-10-01 13:52:32.492498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.907 [2024-10-01 13:52:32.492539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.907 [2024-10-01 13:52:32.492560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.907 [2024-10-01 13:52:32.492764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.907 [2024-10-01 13:52:32.502881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.907 [2024-10-01 13:52:32.503076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.907 [2024-10-01 13:52:32.503140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.907 [2024-10-01 13:52:32.503162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.907 [2024-10-01 13:52:32.503199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.907 [2024-10-01 13:52:32.503232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.907 [2024-10-01 13:52:32.503250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.907 [2024-10-01 13:52:32.503267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.907 [2024-10-01 13:52:32.503299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.907 [2024-10-01 13:52:32.513390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.907 [2024-10-01 13:52:32.513561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.907 [2024-10-01 13:52:32.513597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.907 [2024-10-01 13:52:32.513616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.907 [2024-10-01 13:52:32.513653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.907 [2024-10-01 13:52:32.513688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.907 [2024-10-01 13:52:32.513706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.907 [2024-10-01 13:52:32.513723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.907 [2024-10-01 13:52:32.513755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.907 [2024-10-01 13:52:32.523509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.907 [2024-10-01 13:52:32.523676] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.907 [2024-10-01 13:52:32.523709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.907 [2024-10-01 13:52:32.523729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.908 [2024-10-01 13:52:32.524975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.908 [2024-10-01 13:52:32.525176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.908 [2024-10-01 13:52:32.525210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.908 [2024-10-01 13:52:32.525230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.908 [2024-10-01 13:52:32.525265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.908 [2024-10-01 13:52:32.533648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.908 [2024-10-01 13:52:32.533802] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.908 [2024-10-01 13:52:32.533837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.908 [2024-10-01 13:52:32.533863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.908 [2024-10-01 13:52:32.533898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.908 [2024-10-01 13:52:32.533978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.908 [2024-10-01 13:52:32.533998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.908 [2024-10-01 13:52:32.534052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.908 [2024-10-01 13:52:32.534089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.908 [2024-10-01 13:52:32.544283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.908 [2024-10-01 13:52:32.544457] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.908 [2024-10-01 13:52:32.544492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.908 [2024-10-01 13:52:32.544511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.908 [2024-10-01 13:52:32.544546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.908 [2024-10-01 13:52:32.544580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.908 [2024-10-01 13:52:32.544599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.908 [2024-10-01 13:52:32.544614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.908 [2024-10-01 13:52:32.544647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.908 [2024-10-01 13:52:32.554988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.908 [2024-10-01 13:52:32.555853] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.908 [2024-10-01 13:52:32.555901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.908 [2024-10-01 13:52:32.555945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.908 [2024-10-01 13:52:32.556126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.908 [2024-10-01 13:52:32.556175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.908 [2024-10-01 13:52:32.556195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.908 [2024-10-01 13:52:32.556212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.908 [2024-10-01 13:52:32.556247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.908 [2024-10-01 13:52:32.566467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.908 [2024-10-01 13:52:32.566637] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.908 [2024-10-01 13:52:32.566671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.908 [2024-10-01 13:52:32.566700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.908 [2024-10-01 13:52:32.566734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.908 [2024-10-01 13:52:32.567647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.908 [2024-10-01 13:52:32.567687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.908 [2024-10-01 13:52:32.567709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.908 [2024-10-01 13:52:32.567979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.908 [2024-10-01 13:52:32.577947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.908 [2024-10-01 13:52:32.578108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.908 [2024-10-01 13:52:32.578144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.908 [2024-10-01 13:52:32.578163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.908 [2024-10-01 13:52:32.578199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.908 [2024-10-01 13:52:32.578232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.908 [2024-10-01 13:52:32.578250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.908 [2024-10-01 13:52:32.578267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.908 [2024-10-01 13:52:32.578299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.908 [2024-10-01 13:52:32.588469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.908 [2024-10-01 13:52:32.588623] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.908 [2024-10-01 13:52:32.588657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.908 [2024-10-01 13:52:32.588676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.908 [2024-10-01 13:52:32.588710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.908 [2024-10-01 13:52:32.588742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.908 [2024-10-01 13:52:32.588761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.908 [2024-10-01 13:52:32.588777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.908 [2024-10-01 13:52:32.588810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.908 [2024-10-01 13:52:32.599220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.908 [2024-10-01 13:52:32.600094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.908 [2024-10-01 13:52:32.600141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.908 [2024-10-01 13:52:32.600163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.908 [2024-10-01 13:52:32.600340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.908 [2024-10-01 13:52:32.600414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.908 [2024-10-01 13:52:32.600437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.908 [2024-10-01 13:52:32.600458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.908 [2024-10-01 13:52:32.600492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.908 [2024-10-01 13:52:32.610686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.908 [2024-10-01 13:52:32.610838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.908 [2024-10-01 13:52:32.610873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.908 [2024-10-01 13:52:32.610945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.908 [2024-10-01 13:52:32.610984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.908 [2024-10-01 13:52:32.611897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.908 [2024-10-01 13:52:32.611949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.908 [2024-10-01 13:52:32.611982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.908 [2024-10-01 13:52:32.612193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.908 [2024-10-01 13:52:32.621899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.908 [2024-10-01 13:52:32.622069] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.908 [2024-10-01 13:52:32.622104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.908 [2024-10-01 13:52:32.622124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.908 [2024-10-01 13:52:32.622161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.908 [2024-10-01 13:52:32.622194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.908 [2024-10-01 13:52:32.622212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.909 [2024-10-01 13:52:32.622227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.909 [2024-10-01 13:52:32.622259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.909 [2024-10-01 13:52:32.632422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.909 [2024-10-01 13:52:32.632586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.909 [2024-10-01 13:52:32.632621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.909 [2024-10-01 13:52:32.632640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.909 [2024-10-01 13:52:32.632675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.909 [2024-10-01 13:52:32.632707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.909 [2024-10-01 13:52:32.632726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.909 [2024-10-01 13:52:32.632742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.909 [2024-10-01 13:52:32.632774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.909 [2024-10-01 13:52:32.643238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.909 [2024-10-01 13:52:32.644127] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.909 [2024-10-01 13:52:32.644176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.909 [2024-10-01 13:52:32.644198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.909 [2024-10-01 13:52:32.644385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.909 [2024-10-01 13:52:32.644434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.909 [2024-10-01 13:52:32.644484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.909 [2024-10-01 13:52:32.644502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.909 [2024-10-01 13:52:32.644537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.909 [2024-10-01 13:52:32.654764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.909 [2024-10-01 13:52:32.654950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.909 [2024-10-01 13:52:32.654987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.909 [2024-10-01 13:52:32.655006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.909 [2024-10-01 13:52:32.655043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.909 [2024-10-01 13:52:32.655968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.909 [2024-10-01 13:52:32.656013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.909 [2024-10-01 13:52:32.656034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.909 [2024-10-01 13:52:32.656249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.909 [2024-10-01 13:52:32.665981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.909 [2024-10-01 13:52:32.666121] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.909 [2024-10-01 13:52:32.666155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.909 [2024-10-01 13:52:32.666174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.909 [2024-10-01 13:52:32.666209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.909 [2024-10-01 13:52:32.666243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.909 [2024-10-01 13:52:32.666260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.909 [2024-10-01 13:52:32.666277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.909 [2024-10-01 13:52:32.666308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.909 [2024-10-01 13:52:32.676485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.909 [2024-10-01 13:52:32.676633] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.909 [2024-10-01 13:52:32.676667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.909 [2024-10-01 13:52:32.676686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.909 [2024-10-01 13:52:32.676721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.909 [2024-10-01 13:52:32.676755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.909 [2024-10-01 13:52:32.676772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.909 [2024-10-01 13:52:32.676789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.909 [2024-10-01 13:52:32.676821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.909 [2024-10-01 13:52:32.687150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.909 [2024-10-01 13:52:32.688109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.909 [2024-10-01 13:52:32.688159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.909 [2024-10-01 13:52:32.688181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.909 [2024-10-01 13:52:32.688395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.909 [2024-10-01 13:52:32.688445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.909 [2024-10-01 13:52:32.688465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.909 [2024-10-01 13:52:32.688482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.909 [2024-10-01 13:52:32.688515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.909 [2024-10-01 13:52:32.698604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.909 [2024-10-01 13:52:32.698767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.909 [2024-10-01 13:52:32.698803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.909 [2024-10-01 13:52:32.698822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.909 [2024-10-01 13:52:32.698859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.909 [2024-10-01 13:52:32.699788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.909 [2024-10-01 13:52:32.699827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.909 [2024-10-01 13:52:32.699849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.909 [2024-10-01 13:52:32.700069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.909 [2024-10-01 13:52:32.709749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.909 [2024-10-01 13:52:32.709943] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.909 [2024-10-01 13:52:32.709979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.909 [2024-10-01 13:52:32.710009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.909 [2024-10-01 13:52:32.710047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.909 [2024-10-01 13:52:32.710080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.909 [2024-10-01 13:52:32.710098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.909 [2024-10-01 13:52:32.710115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.909 [2024-10-01 13:52:32.710148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.909 [2024-10-01 13:52:32.720264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.909 [2024-10-01 13:52:32.720454] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.909 [2024-10-01 13:52:32.720492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.909 [2024-10-01 13:52:32.720512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.909 [2024-10-01 13:52:32.720581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.909 [2024-10-01 13:52:32.720616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.909 [2024-10-01 13:52:32.720634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.909 [2024-10-01 13:52:32.720650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.909 [2024-10-01 13:52:32.720682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.909 [2024-10-01 13:52:32.731684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.909 [2024-10-01 13:52:32.732039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.909 [2024-10-01 13:52:32.732086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.909 [2024-10-01 13:52:32.732109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.909 [2024-10-01 13:52:32.732157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.909 [2024-10-01 13:52:32.732194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.909 [2024-10-01 13:52:32.732213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.909 [2024-10-01 13:52:32.732229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.909 [2024-10-01 13:52:32.732263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.909 [2024-10-01 13:52:32.743176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.910 [2024-10-01 13:52:32.743379] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.910 [2024-10-01 13:52:32.743417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.910 [2024-10-01 13:52:32.743437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.910 [2024-10-01 13:52:32.744373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.910 [2024-10-01 13:52:32.744612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.910 [2024-10-01 13:52:32.744647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.910 [2024-10-01 13:52:32.744667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.910 [2024-10-01 13:52:32.744749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.910 [2024-10-01 13:52:32.754474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.910 [2024-10-01 13:52:32.754666] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.910 [2024-10-01 13:52:32.754703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.910 [2024-10-01 13:52:32.754724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.910 [2024-10-01 13:52:32.754762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.910 [2024-10-01 13:52:32.754796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.910 [2024-10-01 13:52:32.754814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.910 [2024-10-01 13:52:32.754874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.910 [2024-10-01 13:52:32.754927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.910 [2024-10-01 13:52:32.765163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.910 [2024-10-01 13:52:32.765361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.910 [2024-10-01 13:52:32.765398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.910 [2024-10-01 13:52:32.765418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.910 [2024-10-01 13:52:32.765456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.910 [2024-10-01 13:52:32.765489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.910 [2024-10-01 13:52:32.765507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.910 [2024-10-01 13:52:32.765523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.910 [2024-10-01 13:52:32.765555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.910 [2024-10-01 13:52:32.775848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.910 [2024-10-01 13:52:32.776744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.910 [2024-10-01 13:52:32.776794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.910 [2024-10-01 13:52:32.776817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.910 [2024-10-01 13:52:32.777038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.910 [2024-10-01 13:52:32.777089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.910 [2024-10-01 13:52:32.777109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.910 [2024-10-01 13:52:32.777125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.910 [2024-10-01 13:52:32.777159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.910 [2024-10-01 13:52:32.787262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.910 [2024-10-01 13:52:32.787443] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.910 [2024-10-01 13:52:32.787481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.910 [2024-10-01 13:52:32.787501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.910 [2024-10-01 13:52:32.787538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.910 [2024-10-01 13:52:32.788467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.910 [2024-10-01 13:52:32.788507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.910 [2024-10-01 13:52:32.788529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.910 [2024-10-01 13:52:32.788745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.910 [2024-10-01 13:52:32.798511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.910 [2024-10-01 13:52:32.798720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.910 [2024-10-01 13:52:32.798795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.910 [2024-10-01 13:52:32.798817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.910 [2024-10-01 13:52:32.798855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.910 [2024-10-01 13:52:32.798888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.910 [2024-10-01 13:52:32.798905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.910 [2024-10-01 13:52:32.798945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.910 [2024-10-01 13:52:32.798981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.910 [2024-10-01 13:52:32.809112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.910 [2024-10-01 13:52:32.809277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.910 [2024-10-01 13:52:32.809313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.910 [2024-10-01 13:52:32.809341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.910 [2024-10-01 13:52:32.809378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.910 [2024-10-01 13:52:32.809411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.910 [2024-10-01 13:52:32.809429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.910 [2024-10-01 13:52:32.809445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.910 [2024-10-01 13:52:32.809476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.910 [2024-10-01 13:52:32.819837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.910 [2024-10-01 13:52:32.820743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.910 [2024-10-01 13:52:32.820792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.910 [2024-10-01 13:52:32.820814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.910 [2024-10-01 13:52:32.821019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.910 [2024-10-01 13:52:32.821068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.910 [2024-10-01 13:52:32.821089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.910 [2024-10-01 13:52:32.821106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.910 [2024-10-01 13:52:32.821140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.910 [2024-10-01 13:52:32.831370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.910 [2024-10-01 13:52:32.831560] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.910 [2024-10-01 13:52:32.831595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.910 [2024-10-01 13:52:32.831614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.910 [2024-10-01 13:52:32.831651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.910 [2024-10-01 13:52:32.832616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.910 [2024-10-01 13:52:32.832657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.910 [2024-10-01 13:52:32.832678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.910 [2024-10-01 13:52:32.832907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.910 [2024-10-01 13:52:32.842586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.910 [2024-10-01 13:52:32.842746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.910 [2024-10-01 13:52:32.842785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.910 [2024-10-01 13:52:32.842805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.910 [2024-10-01 13:52:32.842841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.910 [2024-10-01 13:52:32.842875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.910 [2024-10-01 13:52:32.842901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.910 [2024-10-01 13:52:32.842932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.910 [2024-10-01 13:52:32.842969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.910 [2024-10-01 13:52:32.853236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.910 [2024-10-01 13:52:32.853415] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.910 [2024-10-01 13:52:32.853451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.910 [2024-10-01 13:52:32.853471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.911 [2024-10-01 13:52:32.853508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.911 [2024-10-01 13:52:32.853541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.911 [2024-10-01 13:52:32.853560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.911 [2024-10-01 13:52:32.853577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.911 [2024-10-01 13:52:32.853608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.911 [2024-10-01 13:52:32.863958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.911 [2024-10-01 13:52:32.864822] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.911 [2024-10-01 13:52:32.864871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.911 [2024-10-01 13:52:32.864903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.911 8140.75 IOPS, 31.80 MiB/s [2024-10-01 13:52:32.866782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.911 [2024-10-01 13:52:32.868021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.911 [2024-10-01 13:52:32.868061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.911 [2024-10-01 13:52:32.868082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.911 [2024-10-01 13:52:32.868987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.911 [2024-10-01 13:52:32.875403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.911 [2024-10-01 13:52:32.875550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.911 [2024-10-01 13:52:32.875585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.911 [2024-10-01 13:52:32.875603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.911 [2024-10-01 13:52:32.875638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.911 [2024-10-01 13:52:32.875671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.911 [2024-10-01 13:52:32.875689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.911 [2024-10-01 13:52:32.875704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.911 [2024-10-01 13:52:32.876616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.911 [2024-10-01 13:52:32.886714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.911 [2024-10-01 13:52:32.886893] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.911 [2024-10-01 13:52:32.886947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.911 [2024-10-01 13:52:32.886969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.911 [2024-10-01 13:52:32.887007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.911 [2024-10-01 13:52:32.887040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.911 [2024-10-01 13:52:32.887058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.911 [2024-10-01 13:52:32.887074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.911 [2024-10-01 13:52:32.887106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.911 [2024-10-01 13:52:32.897303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.911 [2024-10-01 13:52:32.897513] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.911 [2024-10-01 13:52:32.897550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.911 [2024-10-01 13:52:32.897569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.911 [2024-10-01 13:52:32.897608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.911 [2024-10-01 13:52:32.897640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.911 [2024-10-01 13:52:32.897659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.911 [2024-10-01 13:52:32.897676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.911 [2024-10-01 13:52:32.897709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.911 [2024-10-01 13:52:32.908195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.911 [2024-10-01 13:52:32.909112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.911 [2024-10-01 13:52:32.909160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.911 [2024-10-01 13:52:32.909229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.911 [2024-10-01 13:52:32.909436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.911 [2024-10-01 13:52:32.909484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.911 [2024-10-01 13:52:32.909504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.911 [2024-10-01 13:52:32.909521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.911 [2024-10-01 13:52:32.909554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.911 [2024-10-01 13:52:32.918330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.911 [2024-10-01 13:52:32.919736] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.911 [2024-10-01 13:52:32.919792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.911 [2024-10-01 13:52:32.919815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.911 [2024-10-01 13:52:32.920056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.911 [2024-10-01 13:52:32.921178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.911 [2024-10-01 13:52:32.921226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.911 [2024-10-01 13:52:32.921248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.911 [2024-10-01 13:52:32.921556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.911 [2024-10-01 13:52:32.928444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.911 [2024-10-01 13:52:32.929469] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.911 [2024-10-01 13:52:32.929536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.911 [2024-10-01 13:52:32.929566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.911 [2024-10-01 13:52:32.929775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.911 [2024-10-01 13:52:32.930887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.911 [2024-10-01 13:52:32.930946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.911 [2024-10-01 13:52:32.930967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.911 [2024-10-01 13:52:32.931660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.911 [2024-10-01 13:52:32.938746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.911 [2024-10-01 13:52:32.938923] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.911 [2024-10-01 13:52:32.938971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.911 [2024-10-01 13:52:32.939004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.911 [2024-10-01 13:52:32.939063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.911 [2024-10-01 13:52:32.939101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.911 [2024-10-01 13:52:32.939162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.911 [2024-10-01 13:52:32.939180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.911 [2024-10-01 13:52:32.939214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.911 [2024-10-01 13:52:32.949458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.911 [2024-10-01 13:52:32.949704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.911 [2024-10-01 13:52:32.949744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.911 [2024-10-01 13:52:32.949763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.911 [2024-10-01 13:52:32.950753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.911 [2024-10-01 13:52:32.951038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.911 [2024-10-01 13:52:32.951079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.911 [2024-10-01 13:52:32.951099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.911 [2024-10-01 13:52:32.952395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.911 [2024-10-01 13:52:32.960860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.911 [2024-10-01 13:52:32.961042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.911 [2024-10-01 13:52:32.961095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.911 [2024-10-01 13:52:32.961123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.911 [2024-10-01 13:52:32.961163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.911 [2024-10-01 13:52:32.961198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.911 [2024-10-01 13:52:32.961216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.912 [2024-10-01 13:52:32.961233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.912 [2024-10-01 13:52:32.961265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.912 [2024-10-01 13:52:32.971877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.912 [2024-10-01 13:52:32.972116] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.912 [2024-10-01 13:52:32.972158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.912 [2024-10-01 13:52:32.972178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.912 [2024-10-01 13:52:32.972218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.912 [2024-10-01 13:52:32.972252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.912 [2024-10-01 13:52:32.972270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.912 [2024-10-01 13:52:32.972287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.912 [2024-10-01 13:52:32.972319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.912 [2024-10-01 13:52:32.983650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.912 [2024-10-01 13:52:32.983862] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.912 [2024-10-01 13:52:32.983907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.912 [2024-10-01 13:52:32.983947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.912 [2024-10-01 13:52:32.983987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.912 [2024-10-01 13:52:32.984021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.912 [2024-10-01 13:52:32.984040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.912 [2024-10-01 13:52:32.984057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.912 [2024-10-01 13:52:32.984104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.912 [2024-10-01 13:52:32.995029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.912 [2024-10-01 13:52:32.995230] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.912 [2024-10-01 13:52:32.995271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.912 [2024-10-01 13:52:32.995292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.912 [2024-10-01 13:52:32.995331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.912 [2024-10-01 13:52:32.996325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.912 [2024-10-01 13:52:32.996369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.912 [2024-10-01 13:52:32.996391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.912 [2024-10-01 13:52:32.996629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.912 [2024-10-01 13:52:33.006686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.912 [2024-10-01 13:52:33.006886] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.912 [2024-10-01 13:52:33.006945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.912 [2024-10-01 13:52:33.006968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.912 [2024-10-01 13:52:33.007013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.912 [2024-10-01 13:52:33.007067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.912 [2024-10-01 13:52:33.007099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.912 [2024-10-01 13:52:33.007121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.912 [2024-10-01 13:52:33.007386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.912 [2024-10-01 13:52:33.017134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.912 [2024-10-01 13:52:33.017354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.912 [2024-10-01 13:52:33.017396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.912 [2024-10-01 13:52:33.017416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.912 [2024-10-01 13:52:33.017499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.912 [2024-10-01 13:52:33.017535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.912 [2024-10-01 13:52:33.017553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.912 [2024-10-01 13:52:33.017569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.912 [2024-10-01 13:52:33.017602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.912 [2024-10-01 13:52:33.028657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.912 [2024-10-01 13:52:33.028865] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.912 [2024-10-01 13:52:33.028935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.912 [2024-10-01 13:52:33.028974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.912 [2024-10-01 13:52:33.029018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.912 [2024-10-01 13:52:33.029052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.912 [2024-10-01 13:52:33.029070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.912 [2024-10-01 13:52:33.029087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.912 [2024-10-01 13:52:33.029121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.912 [2024-10-01 13:52:33.039347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.912 [2024-10-01 13:52:33.040532] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.912 [2024-10-01 13:52:33.040599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.912 [2024-10-01 13:52:33.040630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.912 [2024-10-01 13:52:33.040852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.912 [2024-10-01 13:52:33.041010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.912 [2024-10-01 13:52:33.041037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.912 [2024-10-01 13:52:33.041055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.912 [2024-10-01 13:52:33.042366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.912 [2024-10-01 13:52:33.050528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.912 [2024-10-01 13:52:33.050733] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.912 [2024-10-01 13:52:33.050774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.912 [2024-10-01 13:52:33.050794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.912 [2024-10-01 13:52:33.050838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.912 [2024-10-01 13:52:33.050891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.912 [2024-10-01 13:52:33.050940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.912 [2024-10-01 13:52:33.051008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.912 [2024-10-01 13:52:33.051298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.912 [2024-10-01 13:52:33.061067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.912 [2024-10-01 13:52:33.061277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.912 [2024-10-01 13:52:33.061318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.912 [2024-10-01 13:52:33.061339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.912 [2024-10-01 13:52:33.061378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.912 [2024-10-01 13:52:33.061412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.912 [2024-10-01 13:52:33.061430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.912 [2024-10-01 13:52:33.061446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.912 [2024-10-01 13:52:33.061478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.912 [2024-10-01 13:52:33.072520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.912 [2024-10-01 13:52:33.072721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.912 [2024-10-01 13:52:33.072759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.912 [2024-10-01 13:52:33.072780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.912 [2024-10-01 13:52:33.072848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.912 [2024-10-01 13:52:33.072891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.912 [2024-10-01 13:52:33.072925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.912 [2024-10-01 13:52:33.072946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.912 [2024-10-01 13:52:33.072987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.912 [2024-10-01 13:52:33.083137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.913 [2024-10-01 13:52:33.083317] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.913 [2024-10-01 13:52:33.083367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.913 [2024-10-01 13:52:33.083399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.913 [2024-10-01 13:52:33.084394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.913 [2024-10-01 13:52:33.084646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.913 [2024-10-01 13:52:33.084687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.913 [2024-10-01 13:52:33.084708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.913 [2024-10-01 13:52:33.084861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.913 [2024-10-01 13:52:33.094261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.913 [2024-10-01 13:52:33.094507] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.913 [2024-10-01 13:52:33.094572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.913 [2024-10-01 13:52:33.094607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.913 [2024-10-01 13:52:33.094660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.913 [2024-10-01 13:52:33.094705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.913 [2024-10-01 13:52:33.094725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.913 [2024-10-01 13:52:33.094741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.913 [2024-10-01 13:52:33.095027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.913 [2024-10-01 13:52:33.105118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.913 [2024-10-01 13:52:33.105292] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.913 [2024-10-01 13:52:33.105329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.913 [2024-10-01 13:52:33.105360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.913 [2024-10-01 13:52:33.105412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.913 [2024-10-01 13:52:33.105449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.913 [2024-10-01 13:52:33.105468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.913 [2024-10-01 13:52:33.105484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.913 [2024-10-01 13:52:33.105516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.913 [2024-10-01 13:52:33.116531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.913 [2024-10-01 13:52:33.116722] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.913 [2024-10-01 13:52:33.116764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.913 [2024-10-01 13:52:33.116798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.913 [2024-10-01 13:52:33.116852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.913 [2024-10-01 13:52:33.116892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.913 [2024-10-01 13:52:33.116926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.913 [2024-10-01 13:52:33.116947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.913 [2024-10-01 13:52:33.116982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.913 [2024-10-01 13:52:33.127013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.913 [2024-10-01 13:52:33.127210] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.913 [2024-10-01 13:52:33.127260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.913 [2024-10-01 13:52:33.127288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.913 [2024-10-01 13:52:33.128379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.913 [2024-10-01 13:52:33.128678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.913 [2024-10-01 13:52:33.128717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.913 [2024-10-01 13:52:33.128738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.913 [2024-10-01 13:52:33.128871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.913 [2024-10-01 13:52:33.138414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.913 [2024-10-01 13:52:33.138629] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.913 [2024-10-01 13:52:33.138681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.913 [2024-10-01 13:52:33.138715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.913 [2024-10-01 13:52:33.139057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.913 [2024-10-01 13:52:33.139232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.913 [2024-10-01 13:52:33.139269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.913 [2024-10-01 13:52:33.139299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.913 [2024-10-01 13:52:33.139438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.913 [2024-10-01 13:52:33.148716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.913 [2024-10-01 13:52:33.148942] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.913 [2024-10-01 13:52:33.148982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.913 [2024-10-01 13:52:33.149012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.913 [2024-10-01 13:52:33.149068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.913 [2024-10-01 13:52:33.149119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.913 [2024-10-01 13:52:33.149141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.913 [2024-10-01 13:52:33.149160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.913 [2024-10-01 13:52:33.149210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.913 [2024-10-01 13:52:33.159436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.913 [2024-10-01 13:52:33.160340] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.913 [2024-10-01 13:52:33.160391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.913 [2024-10-01 13:52:33.160414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.913 [2024-10-01 13:52:33.160595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.913 [2024-10-01 13:52:33.160645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.913 [2024-10-01 13:52:33.160666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.913 [2024-10-01 13:52:33.160682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.913 [2024-10-01 13:52:33.160755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.913 [2024-10-01 13:52:33.169560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.913 [2024-10-01 13:52:33.169721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.913 [2024-10-01 13:52:33.169758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.913 [2024-10-01 13:52:33.169777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.914 [2024-10-01 13:52:33.171054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.914 [2024-10-01 13:52:33.171296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.914 [2024-10-01 13:52:33.171340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.914 [2024-10-01 13:52:33.171360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.914 [2024-10-01 13:52:33.171423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.914 [2024-10-01 13:52:33.179669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.914 [2024-10-01 13:52:33.180613] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.914 [2024-10-01 13:52:33.180663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.914 [2024-10-01 13:52:33.180686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.914 [2024-10-01 13:52:33.180883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.914 [2024-10-01 13:52:33.181879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.914 [2024-10-01 13:52:33.181931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.914 [2024-10-01 13:52:33.181957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.914 [2024-10-01 13:52:33.182571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.914 [2024-10-01 13:52:33.189780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.914 [2024-10-01 13:52:33.189941] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.914 [2024-10-01 13:52:33.189985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.914 [2024-10-01 13:52:33.190005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.914 [2024-10-01 13:52:33.190041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.914 [2024-10-01 13:52:33.190074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.914 [2024-10-01 13:52:33.190091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.914 [2024-10-01 13:52:33.190108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.914 [2024-10-01 13:52:33.190140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.914 [2024-10-01 13:52:33.200282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.914 [2024-10-01 13:52:33.200433] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.914 [2024-10-01 13:52:33.200469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.914 [2024-10-01 13:52:33.200526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.914 [2024-10-01 13:52:33.200569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.914 [2024-10-01 13:52:33.201484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.914 [2024-10-01 13:52:33.201524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.914 [2024-10-01 13:52:33.201544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.914 [2024-10-01 13:52:33.201769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.914 [2024-10-01 13:52:33.211511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.914 [2024-10-01 13:52:33.211663] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.914 [2024-10-01 13:52:33.211698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.914 [2024-10-01 13:52:33.211718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.914 [2024-10-01 13:52:33.211753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.914 [2024-10-01 13:52:33.211785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.914 [2024-10-01 13:52:33.211802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.914 [2024-10-01 13:52:33.211818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.914 [2024-10-01 13:52:33.211850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.914 [2024-10-01 13:52:33.221930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.914 [2024-10-01 13:52:33.222099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.914 [2024-10-01 13:52:33.222136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.914 [2024-10-01 13:52:33.222156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.914 [2024-10-01 13:52:33.222192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.914 [2024-10-01 13:52:33.222227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.914 [2024-10-01 13:52:33.222245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.914 [2024-10-01 13:52:33.222262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.914 [2024-10-01 13:52:33.222294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.914 [2024-10-01 13:52:33.232571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.914 [2024-10-01 13:52:33.233452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.914 [2024-10-01 13:52:33.233500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.914 [2024-10-01 13:52:33.233522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.914 [2024-10-01 13:52:33.233700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.914 [2024-10-01 13:52:33.233768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.914 [2024-10-01 13:52:33.233828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.914 [2024-10-01 13:52:33.233846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.914 [2024-10-01 13:52:33.233881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.914 [2024-10-01 13:52:33.243898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.914 [2024-10-01 13:52:33.244062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.914 [2024-10-01 13:52:33.244098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.914 [2024-10-01 13:52:33.244117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.914 [2024-10-01 13:52:33.244152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.914 [2024-10-01 13:52:33.244184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.914 [2024-10-01 13:52:33.244202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.914 [2024-10-01 13:52:33.244218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.914 [2024-10-01 13:52:33.245130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.914 [2024-10-01 13:52:33.255050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.914 [2024-10-01 13:52:33.255213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.914 [2024-10-01 13:52:33.255248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.914 [2024-10-01 13:52:33.255268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.914 [2024-10-01 13:52:33.255305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.914 [2024-10-01 13:52:33.255339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.914 [2024-10-01 13:52:33.255357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.914 [2024-10-01 13:52:33.255373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.914 [2024-10-01 13:52:33.255405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.914 [2024-10-01 13:52:33.265402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.914 [2024-10-01 13:52:33.265555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.914 [2024-10-01 13:52:33.265590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.914 [2024-10-01 13:52:33.265609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.914 [2024-10-01 13:52:33.265645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.914 [2024-10-01 13:52:33.265689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.914 [2024-10-01 13:52:33.265710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.914 [2024-10-01 13:52:33.265726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.914 [2024-10-01 13:52:33.265758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.914 [2024-10-01 13:52:33.276688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.914 [2024-10-01 13:52:33.277003] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.914 [2024-10-01 13:52:33.277044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.914 [2024-10-01 13:52:33.277064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.914 [2024-10-01 13:52:33.277122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.914 [2024-10-01 13:52:33.277161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.914 [2024-10-01 13:52:33.277179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.914 [2024-10-01 13:52:33.277196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.915 [2024-10-01 13:52:33.277227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.915 [2024-10-01 13:52:33.287351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.915 [2024-10-01 13:52:33.287504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.915 [2024-10-01 13:52:33.287539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.915 [2024-10-01 13:52:33.287558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.915 [2024-10-01 13:52:33.287594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.915 [2024-10-01 13:52:33.287626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.915 [2024-10-01 13:52:33.287644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.915 [2024-10-01 13:52:33.287660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.915 [2024-10-01 13:52:33.288569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.915 [2024-10-01 13:52:33.298506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.915 [2024-10-01 13:52:33.298682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.915 [2024-10-01 13:52:33.298717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.915 [2024-10-01 13:52:33.298737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.915 [2024-10-01 13:52:33.298774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.915 [2024-10-01 13:52:33.298806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.915 [2024-10-01 13:52:33.298824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.915 [2024-10-01 13:52:33.298840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.915 [2024-10-01 13:52:33.298873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.915 [2024-10-01 13:52:33.308946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.915 [2024-10-01 13:52:33.309103] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.915 [2024-10-01 13:52:33.309141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.915 [2024-10-01 13:52:33.309162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.915 [2024-10-01 13:52:33.309239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.915 [2024-10-01 13:52:33.309274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.915 [2024-10-01 13:52:33.309294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.915 [2024-10-01 13:52:33.309311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.915 [2024-10-01 13:52:33.309344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.915 [2024-10-01 13:52:33.319571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.915 [2024-10-01 13:52:33.320440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.915 [2024-10-01 13:52:33.320491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.915 [2024-10-01 13:52:33.320514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.915 [2024-10-01 13:52:33.320693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.915 [2024-10-01 13:52:33.320754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.915 [2024-10-01 13:52:33.320776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.915 [2024-10-01 13:52:33.320793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.915 [2024-10-01 13:52:33.320825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.915 [2024-10-01 13:52:33.330974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.915 [2024-10-01 13:52:33.331142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.915 [2024-10-01 13:52:33.331177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.915 [2024-10-01 13:52:33.331196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.915 [2024-10-01 13:52:33.331232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.915 [2024-10-01 13:52:33.331266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.915 [2024-10-01 13:52:33.331284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.915 [2024-10-01 13:52:33.331300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.915 [2024-10-01 13:52:33.332218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.915 [2024-10-01 13:52:33.342130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.915 [2024-10-01 13:52:33.342304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.915 [2024-10-01 13:52:33.342339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.915 [2024-10-01 13:52:33.342358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.915 [2024-10-01 13:52:33.342394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.915 [2024-10-01 13:52:33.342428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.915 [2024-10-01 13:52:33.342445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.915 [2024-10-01 13:52:33.342500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.915 [2024-10-01 13:52:33.342550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.915 [2024-10-01 13:52:33.352546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.915 [2024-10-01 13:52:33.352707] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.915 [2024-10-01 13:52:33.352743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.915 [2024-10-01 13:52:33.352763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.915 [2024-10-01 13:52:33.352799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.915 [2024-10-01 13:52:33.352831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.915 [2024-10-01 13:52:33.352850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.915 [2024-10-01 13:52:33.352865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.915 [2024-10-01 13:52:33.352904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.915 [2024-10-01 13:52:33.363264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.915 [2024-10-01 13:52:33.364155] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.915 [2024-10-01 13:52:33.364204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.915 [2024-10-01 13:52:33.364227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.915 [2024-10-01 13:52:33.364981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.915 [2024-10-01 13:52:33.365052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.915 [2024-10-01 13:52:33.365075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.915 [2024-10-01 13:52:33.365092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.915 [2024-10-01 13:52:33.366295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.915 [2024-10-01 13:52:33.374777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.915 [2024-10-01 13:52:33.374944] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.915 [2024-10-01 13:52:33.374980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.915 [2024-10-01 13:52:33.375001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.915 [2024-10-01 13:52:33.375037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.915 [2024-10-01 13:52:33.375070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.915 [2024-10-01 13:52:33.375088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.915 [2024-10-01 13:52:33.375105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.915 [2024-10-01 13:52:33.376028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.915 [2024-10-01 13:52:33.386215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.915 [2024-10-01 13:52:33.386415] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.915 [2024-10-01 13:52:33.386452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.915 [2024-10-01 13:52:33.386471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.915 [2024-10-01 13:52:33.386509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.915 [2024-10-01 13:52:33.386588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.915 [2024-10-01 13:52:33.386612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.915 [2024-10-01 13:52:33.386629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.915 [2024-10-01 13:52:33.386662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.915 [2024-10-01 13:52:33.396948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.915 [2024-10-01 13:52:33.397124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.916 [2024-10-01 13:52:33.397160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.916 [2024-10-01 13:52:33.397180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.916 [2024-10-01 13:52:33.397238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.916 [2024-10-01 13:52:33.397276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.916 [2024-10-01 13:52:33.397294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.916 [2024-10-01 13:52:33.397311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.916 [2024-10-01 13:52:33.397354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.916 [2024-10-01 13:52:33.407948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.916 [2024-10-01 13:52:33.408818] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.916 [2024-10-01 13:52:33.408866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.916 [2024-10-01 13:52:33.408889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.916 [2024-10-01 13:52:33.409116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.916 [2024-10-01 13:52:33.409167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.916 [2024-10-01 13:52:33.409189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.916 [2024-10-01 13:52:33.409206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.916 [2024-10-01 13:52:33.409240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.916 [2024-10-01 13:52:33.418069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.916 [2024-10-01 13:52:33.418223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.916 [2024-10-01 13:52:33.418259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.916 [2024-10-01 13:52:33.418278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.916 [2024-10-01 13:52:33.418316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.916 [2024-10-01 13:52:33.419622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.916 [2024-10-01 13:52:33.419664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.916 [2024-10-01 13:52:33.419686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.916 [2024-10-01 13:52:33.419971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.916 [2024-10-01 13:52:33.428180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.916 [2024-10-01 13:52:33.428358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.916 [2024-10-01 13:52:33.428395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.916 [2024-10-01 13:52:33.428413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.916 [2024-10-01 13:52:33.428450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.916 [2024-10-01 13:52:33.428504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.916 [2024-10-01 13:52:33.428527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.916 [2024-10-01 13:52:33.428544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.916 [2024-10-01 13:52:33.428577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.916 [2024-10-01 13:52:33.439613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.916 [2024-10-01 13:52:33.440521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.916 [2024-10-01 13:52:33.440572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.916 [2024-10-01 13:52:33.440595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.916 [2024-10-01 13:52:33.441464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.916 [2024-10-01 13:52:33.442771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.916 [2024-10-01 13:52:33.442815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.916 [2024-10-01 13:52:33.442837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.916 [2024-10-01 13:52:33.443706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.916 [2024-10-01 13:52:33.449746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.916 [2024-10-01 13:52:33.449887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.916 [2024-10-01 13:52:33.449942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.916 [2024-10-01 13:52:33.449965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.916 [2024-10-01 13:52:33.450006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.916 [2024-10-01 13:52:33.450045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.916 [2024-10-01 13:52:33.450063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.916 [2024-10-01 13:52:33.450079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.916 [2024-10-01 13:52:33.450152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.916 [2024-10-01 13:52:33.460287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.916 [2024-10-01 13:52:33.460482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.916 [2024-10-01 13:52:33.460519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.916 [2024-10-01 13:52:33.460539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.916 [2024-10-01 13:52:33.460580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.916 [2024-10-01 13:52:33.460617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.916 [2024-10-01 13:52:33.460635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.916 [2024-10-01 13:52:33.460652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.916 [2024-10-01 13:52:33.460689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.916 [2024-10-01 13:52:33.470431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.916 [2024-10-01 13:52:33.471514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.916 [2024-10-01 13:52:33.471575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.916 [2024-10-01 13:52:33.471598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.916 [2024-10-01 13:52:33.471823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.916 [2024-10-01 13:52:33.471930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.916 [2024-10-01 13:52:33.471954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.916 [2024-10-01 13:52:33.471971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.916 [2024-10-01 13:52:33.472024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.916 [2024-10-01 13:52:33.481303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.916 [2024-10-01 13:52:33.481488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.916 [2024-10-01 13:52:33.481525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.916 [2024-10-01 13:52:33.481545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.916 [2024-10-01 13:52:33.482472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.916 [2024-10-01 13:52:33.483158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.916 [2024-10-01 13:52:33.483199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.916 [2024-10-01 13:52:33.483222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.916 [2024-10-01 13:52:33.483321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.916 [2024-10-01 13:52:33.491881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.916 [2024-10-01 13:52:33.492056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.916 [2024-10-01 13:52:33.492093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.916 [2024-10-01 13:52:33.492151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.916 [2024-10-01 13:52:33.492192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.916 [2024-10-01 13:52:33.492230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.916 [2024-10-01 13:52:33.492248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.916 [2024-10-01 13:52:33.492264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.916 [2024-10-01 13:52:33.492300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
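Each failed attempt above follows the same sequence: nvme_ctrlr_disconnect notices the reset, the transport connect fails with errno 111, flushing the qpair reports a bad file descriptor, nvme_ctrlr_process_init finds the controller in an error state, spdk_nvme_ctrlr_reconnect_poll_async gives up, and _bdev_nvme_reset_ctrlr_complete logs the reset as failed before the next attempt starts roughly 10 ms later. The sketch below is a deliberately simplified retry loop with that shape; it is not SPDK's implementation, and try_connect() is a hypothetical stand-in for the transport connect step.

/* Illustrative only: a simplified reconnect loop, NOT SPDK's implementation,
 * showing the shape of the cycle the log records on every attempt: try to
 * connect, give up when the listener refuses, report the reset as failed,
 * and let the caller schedule the next attempt. */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical helper standing in for the transport connect step. */
static bool try_connect(const char *addr, int port)
{
    (void)addr;
    (void)port;
    return false;   /* pretend the listener is down, as in the log */
}

int main(void)
{
    for (int attempt = 1; attempt <= 3; attempt++) {
        printf("resetting controller (attempt %d)\n", attempt);
        if (!try_connect("10.0.0.3", 4420)) {
            printf("controller reinitialization failed\n");
            printf("Resetting controller failed.\n");
            continue;   /* a real poller would wait before retrying */
        }
        printf("controller reset succeeded\n");
        break;
    }
    return 0;
}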
00:18:34.916 [2024-10-01 13:52:33.502021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.916 [2024-10-01 13:52:33.502208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.916 [2024-10-01 13:52:33.502244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.916 [2024-10-01 13:52:33.502263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.916 [2024-10-01 13:52:33.502302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.917 [2024-10-01 13:52:33.502339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.917 [2024-10-01 13:52:33.502357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.917 [2024-10-01 13:52:33.502374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.917 [2024-10-01 13:52:33.502417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.917 [2024-10-01 13:52:33.513397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.917 [2024-10-01 13:52:33.513571] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.917 [2024-10-01 13:52:33.513607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.917 [2024-10-01 13:52:33.513627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.917 [2024-10-01 13:52:33.513668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.917 [2024-10-01 13:52:33.513705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.917 [2024-10-01 13:52:33.513723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.917 [2024-10-01 13:52:33.513740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.917 [2024-10-01 13:52:33.513784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.917 [2024-10-01 13:52:33.525664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.917 [2024-10-01 13:52:33.526020] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.917 [2024-10-01 13:52:33.526059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.917 [2024-10-01 13:52:33.526080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.917 [2024-10-01 13:52:33.526131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.917 [2024-10-01 13:52:33.526172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.917 [2024-10-01 13:52:33.526229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.917 [2024-10-01 13:52:33.526255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.917 [2024-10-01 13:52:33.526293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.917 [2024-10-01 13:52:33.536998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.917 [2024-10-01 13:52:33.537196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.917 [2024-10-01 13:52:33.537232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.917 [2024-10-01 13:52:33.537252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.917 [2024-10-01 13:52:33.537293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.917 [2024-10-01 13:52:33.537330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.917 [2024-10-01 13:52:33.537349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.917 [2024-10-01 13:52:33.537367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.917 [2024-10-01 13:52:33.538327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.917 [2024-10-01 13:52:33.547728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.917 [2024-10-01 13:52:33.548800] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.917 [2024-10-01 13:52:33.548846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.917 [2024-10-01 13:52:33.548868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.917 [2024-10-01 13:52:33.549505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.917 [2024-10-01 13:52:33.549643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.917 [2024-10-01 13:52:33.549682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.917 [2024-10-01 13:52:33.549703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.917 [2024-10-01 13:52:33.549743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.917 [2024-10-01 13:52:33.557843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.917 [2024-10-01 13:52:33.558016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.917 [2024-10-01 13:52:33.558052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.917 [2024-10-01 13:52:33.558071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.917 [2024-10-01 13:52:33.558111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.917 [2024-10-01 13:52:33.558149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.917 [2024-10-01 13:52:33.558167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.917 [2024-10-01 13:52:33.558183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.917 [2024-10-01 13:52:33.558219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.917 [2024-10-01 13:52:33.568556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.917 [2024-10-01 13:52:33.568734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.917 [2024-10-01 13:52:33.568771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.917 [2024-10-01 13:52:33.568790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.917 [2024-10-01 13:52:33.568829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.917 [2024-10-01 13:52:33.568866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.917 [2024-10-01 13:52:33.568884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.917 [2024-10-01 13:52:33.568900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.917 [2024-10-01 13:52:33.569842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.917 [2024-10-01 13:52:33.578690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.917 [2024-10-01 13:52:33.579634] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.917 [2024-10-01 13:52:33.579682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.917 [2024-10-01 13:52:33.579705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.917 [2024-10-01 13:52:33.579902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.917 [2024-10-01 13:52:33.580939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.917 [2024-10-01 13:52:33.580978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.917 [2024-10-01 13:52:33.581000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.917 [2024-10-01 13:52:33.581606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.917 [2024-10-01 13:52:33.588820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.917 [2024-10-01 13:52:33.588986] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.917 [2024-10-01 13:52:33.589023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.917 [2024-10-01 13:52:33.589042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.917 [2024-10-01 13:52:33.589796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.917 [2024-10-01 13:52:33.590021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.917 [2024-10-01 13:52:33.590058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.917 [2024-10-01 13:52:33.590078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.917 [2024-10-01 13:52:33.590125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.917 [2024-10-01 13:52:33.598944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.917 [2024-10-01 13:52:33.599101] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.917 [2024-10-01 13:52:33.599137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.917 [2024-10-01 13:52:33.599157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.917 [2024-10-01 13:52:33.599240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.917 [2024-10-01 13:52:33.599279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.917 [2024-10-01 13:52:33.599297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.917 [2024-10-01 13:52:33.599312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.918 [2024-10-01 13:52:33.599348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.918 [2024-10-01 13:52:33.609633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.918 [2024-10-01 13:52:33.609807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.918 [2024-10-01 13:52:33.609844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.918 [2024-10-01 13:52:33.609863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.918 [2024-10-01 13:52:33.609903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.918 [2024-10-01 13:52:33.609960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.918 [2024-10-01 13:52:33.609980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.918 [2024-10-01 13:52:33.609996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.918 [2024-10-01 13:52:33.610032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.918 [2024-10-01 13:52:33.619763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.918 [2024-10-01 13:52:33.619933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.918 [2024-10-01 13:52:33.619968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.918 [2024-10-01 13:52:33.619988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.918 [2024-10-01 13:52:33.620028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.918 [2024-10-01 13:52:33.620065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.918 [2024-10-01 13:52:33.620084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.918 [2024-10-01 13:52:33.620100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.918 [2024-10-01 13:52:33.620135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.918 [2024-10-01 13:52:33.630442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.918 [2024-10-01 13:52:33.630611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.918 [2024-10-01 13:52:33.630650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.918 [2024-10-01 13:52:33.630670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.918 [2024-10-01 13:52:33.630718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.918 [2024-10-01 13:52:33.630756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.918 [2024-10-01 13:52:33.630774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.918 [2024-10-01 13:52:33.630825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.918 [2024-10-01 13:52:33.630864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.918 [2024-10-01 13:52:33.640632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.918 [2024-10-01 13:52:33.640785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.918 [2024-10-01 13:52:33.640821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.918 [2024-10-01 13:52:33.640841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.918 [2024-10-01 13:52:33.640880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.918 [2024-10-01 13:52:33.640934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.918 [2024-10-01 13:52:33.640956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.918 [2024-10-01 13:52:33.640972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.918 [2024-10-01 13:52:33.641008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.918 [2024-10-01 13:52:33.651580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.918 [2024-10-01 13:52:33.651743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.918 [2024-10-01 13:52:33.651780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.918 [2024-10-01 13:52:33.651799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.918 [2024-10-01 13:52:33.651839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.918 [2024-10-01 13:52:33.651877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.918 [2024-10-01 13:52:33.651895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.918 [2024-10-01 13:52:33.651929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.918 [2024-10-01 13:52:33.651971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.918 [2024-10-01 13:52:33.661702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.918 [2024-10-01 13:52:33.661859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.918 [2024-10-01 13:52:33.661896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.918 [2024-10-01 13:52:33.661930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.918 [2024-10-01 13:52:33.661974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.918 [2024-10-01 13:52:33.662011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.918 [2024-10-01 13:52:33.662030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.918 [2024-10-01 13:52:33.662046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.918 [2024-10-01 13:52:33.662082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.918 [2024-10-01 13:52:33.672527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.918 [2024-10-01 13:52:33.672737] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.918 [2024-10-01 13:52:33.672774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.918 [2024-10-01 13:52:33.672794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.918 [2024-10-01 13:52:33.672834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.918 [2024-10-01 13:52:33.672871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.918 [2024-10-01 13:52:33.672888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.918 [2024-10-01 13:52:33.672904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.918 [2024-10-01 13:52:33.672960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.918 [2024-10-01 13:52:33.683327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.918 [2024-10-01 13:52:33.683509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.918 [2024-10-01 13:52:33.683545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.918 [2024-10-01 13:52:33.683564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.918 [2024-10-01 13:52:33.683604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.918 [2024-10-01 13:52:33.684545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.918 [2024-10-01 13:52:33.684584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.918 [2024-10-01 13:52:33.684605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.918 [2024-10-01 13:52:33.685250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
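Once the qpair is torn down, the outstanding WRITE and READ commands on it are reported with the completion status printed in the dump that begins below: ABORTED - SQ DELETION (00/08), i.e. status code type 0x00 (generic command status) and status code 0x08 (command aborted due to SQ deletion). A small sketch decoding that (sct/sc) pair follows; the values are taken from the prints, and the decode table is trimmed to just the codes that appear in this log.

/* Sketch decoding the "(00/08)" pair in the completions printed below:
 * SCT 0x00 is the generic command status type and, within it, SC 0x08 is
 * "command aborted due to SQ deletion". Only the codes seen in this log
 * are handled; anything else falls through to a generic message. */
#include <stdio.h>

static const char *decode_status(unsigned sct, unsigned sc)
{
    if (sct == 0x00 && sc == 0x08)
        return "ABORTED - SQ DELETION";
    if (sct == 0x00 && sc == 0x00)
        return "SUCCESS";
    return "OTHER (see the NVMe base specification status code tables)";
}

int main(void)
{
    unsigned sct = 0x00, sc = 0x08;   /* values from the "(00/08)" prints */

    printf("sct 0x%02x / sc 0x%02x: %s\n", sct, sc, decode_status(sct, sc));
    return 0;
}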
00:18:34.918 [2024-10-01 13:52:33.693861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.918 [2024-10-01 13:52:33.694025] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.918 [2024-10-01 13:52:33.694060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.918 [2024-10-01 13:52:33.694079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.918 [2024-10-01 13:52:33.694118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.918 [2024-10-01 13:52:33.694156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.918 [2024-10-01 13:52:33.694173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.918 [2024-10-01 13:52:33.694189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.918 [2024-10-01 13:52:33.694225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.918 [2024-10-01 13:52:33.704430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.918 [2024-10-01 13:52:33.704501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.918 [2024-10-01 13:52:33.704535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.918 [2024-10-01 13:52:33.704553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.918 [2024-10-01 13:52:33.704606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.918 [2024-10-01 13:52:33.704623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.918 [2024-10-01 13:52:33.704640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.918 [2024-10-01 13:52:33.704655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.918 [2024-10-01 13:52:33.704672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.704687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.704704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.704718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.704735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 
nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.704749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.704766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.704781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.704798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.704812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.704829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.704844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.704860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.704875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.704892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.704906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.704941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.704958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.704975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.704995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.705035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.705067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 
13:52:33.705418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.919 [2024-10-01 13:52:33.705585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.705616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.705648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.705679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.705711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.705743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.705776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.705808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.705839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.705886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.705931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.705964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.705981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.705996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.919 [2024-10-01 13:52:33.706013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.919 [2024-10-01 13:52:33.706028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.920 [2024-10-01 13:52:33.706419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.920 [2024-10-01 13:52:33.706451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.920 [2024-10-01 13:52:33.706482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.920 [2024-10-01 13:52:33.706513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.920 [2024-10-01 13:52:33.706558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.920 [2024-10-01 13:52:33.706591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.920 [2024-10-01 13:52:33.706623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.920 [2024-10-01 13:52:33.706654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 
[2024-10-01 13:52:33.706784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.706979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.706996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.707012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.707028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.707042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.707059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.707074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.707091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.707105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.707122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.707145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.707163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.707177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.707194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.920 [2024-10-01 13:52:33.707209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.707226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.920 [2024-10-01 13:52:33.707241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.707257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.920 [2024-10-01 13:52:33.707272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.707289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.920 [2024-10-01 13:52:33.707303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.707320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.920 [2024-10-01 13:52:33.707335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.920 [2024-10-01 13:52:33.707351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.920 [2024-10-01 13:52:33.707367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.921 [2024-10-01 13:52:33.707411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.921 [2024-10-01 13:52:33.707442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:56 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.921 [2024-10-01 13:52:33.707474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.707505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.707536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.707573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.707606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.707637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.707668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.707699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.707730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.707761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53472 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.707792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.707823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.707855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.707896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.707970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.707991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.708007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.708049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.708081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.708113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.708144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:34.921 [2024-10-01 13:52:33.708175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.708207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.708238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.921 [2024-10-01 13:52:33.708269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d0020 is same with the state(6) to be set 00:18:34.921 [2024-10-01 13:52:33.708304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.921 [2024-10-01 13:52:33.708316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.921 [2024-10-01 13:52:33.708328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53584 len:8 PRP1 0x0 PRP2 0x0 00:18:34.921 [2024-10-01 13:52:33.708342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.921 [2024-10-01 13:52:33.708369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.921 [2024-10-01 13:52:33.708381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53912 len:8 PRP1 0x0 PRP2 0x0 00:18:34.921 [2024-10-01 13:52:33.708395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.921 [2024-10-01 13:52:33.708420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.921 [2024-10-01 13:52:33.708439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53920 len:8 PRP1 0x0 PRP2 0x0 00:18:34.921 [2024-10-01 13:52:33.708455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.921 [2024-10-01 13:52:33.708488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.921 [2024-10-01 13:52:33.708499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53928 len:8 PRP1 0x0 PRP2 0x0 
00:18:34.921 [2024-10-01 13:52:33.708513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.921 [2024-10-01 13:52:33.708539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.921 [2024-10-01 13:52:33.708550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53936 len:8 PRP1 0x0 PRP2 0x0 00:18:34.921 [2024-10-01 13:52:33.708563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.921 [2024-10-01 13:52:33.708589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.921 [2024-10-01 13:52:33.708600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53944 len:8 PRP1 0x0 PRP2 0x0 00:18:34.921 [2024-10-01 13:52:33.708614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.921 [2024-10-01 13:52:33.708639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.921 [2024-10-01 13:52:33.708650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53952 len:8 PRP1 0x0 PRP2 0x0 00:18:34.921 [2024-10-01 13:52:33.708664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.921 [2024-10-01 13:52:33.708690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.921 [2024-10-01 13:52:33.708701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53960 len:8 PRP1 0x0 PRP2 0x0 00:18:34.921 [2024-10-01 13:52:33.708714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.921 [2024-10-01 13:52:33.708729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.922 [2024-10-01 13:52:33.708740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.922 [2024-10-01 13:52:33.708751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53968 len:8 PRP1 0x0 PRP2 0x0 00:18:34.922 [2024-10-01 13:52:33.708766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.922 [2024-10-01 13:52:33.708791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.922 [2024-10-01 13:52:33.708803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.922 [2024-10-01 13:52:33.708814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53976 len:8 PRP1 0x0 PRP2 0x0 00:18:34.922 [2024-10-01 13:52:33.708829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.922 [2024-10-01 13:52:33.708851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.922 [2024-10-01 13:52:33.708863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.922 [2024-10-01 13:52:33.708874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53984 len:8 PRP1 0x0 PRP2 0x0 00:18:34.922 [2024-10-01 13:52:33.708888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.922 [2024-10-01 13:52:33.708909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.922 [2024-10-01 13:52:33.708937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.922 [2024-10-01 13:52:33.708948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53992 len:8 PRP1 0x0 PRP2 0x0 00:18:34.922 [2024-10-01 13:52:33.708963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.922 [2024-10-01 13:52:33.708978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.922 [2024-10-01 13:52:33.708989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.922 [2024-10-01 13:52:33.709000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54000 len:8 PRP1 0x0 PRP2 0x0 00:18:34.922 [2024-10-01 13:52:33.709014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.922 [2024-10-01 13:52:33.709029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.922 [2024-10-01 13:52:33.709039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.922 [2024-10-01 13:52:33.709050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54008 len:8 PRP1 0x0 PRP2 0x0 00:18:34.922 [2024-10-01 13:52:33.709065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.922 [2024-10-01 13:52:33.709079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.922 [2024-10-01 13:52:33.709090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.922 [2024-10-01 13:52:33.709101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54016 len:8 PRP1 0x0 PRP2 0x0 00:18:34.922 [2024-10-01 13:52:33.709115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.922 [2024-10-01 13:52:33.709129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.922 [2024-10-01 13:52:33.709140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.922 [2024-10-01 13:52:33.709150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54024 len:8 PRP1 0x0 PRP2 0x0 00:18:34.922 [2024-10-01 13:52:33.709164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.922 [2024-10-01 13:52:33.709179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.922 [2024-10-01 13:52:33.709189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.922 [2024-10-01 13:52:33.709200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54032 len:8 PRP1 0x0 PRP2 0x0 00:18:34.922 [2024-10-01 13:52:33.709214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.922 [2024-10-01 13:52:33.709310] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9d0020 was disconnected and freed. reset controller. 00:18:34.922 [2024-10-01 13:52:33.709415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.922 [2024-10-01 13:52:33.709443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.922 [2024-10-01 13:52:33.709474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.922 [2024-10-01 13:52:33.709490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.922 [2024-10-01 13:52:33.709507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.922 [2024-10-01 13:52:33.709521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.922 [2024-10-01 13:52:33.709536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.922 [2024-10-01 13:52:33.709557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.922 [2024-10-01 13:52:33.709572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.922 [2024-10-01 13:52:33.710711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.922 [2024-10-01 13:52:33.710762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.922 [2024-10-01 13:52:33.711006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.922 [2024-10-01 13:52:33.711278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.922 [2024-10-01 13:52:33.711312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.922 [2024-10-01 13:52:33.711331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.922 [2024-10-01 13:52:33.711391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.922 [2024-10-01 13:52:33.711415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.922 
[2024-10-01 13:52:33.711432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.922 [2024-10-01 13:52:33.711465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.922 [2024-10-01 13:52:33.711489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.922 [2024-10-01 13:52:33.711516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.922 [2024-10-01 13:52:33.711534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.922 [2024-10-01 13:52:33.711550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.922 [2024-10-01 13:52:33.711567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.922 [2024-10-01 13:52:33.711583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.922 [2024-10-01 13:52:33.711596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.922 [2024-10-01 13:52:33.711628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.922 [2024-10-01 13:52:33.711647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.922 [2024-10-01 13:52:33.721165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.922 [2024-10-01 13:52:33.721224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.922 [2024-10-01 13:52:33.722097] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.922 [2024-10-01 13:52:33.722158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.922 [2024-10-01 13:52:33.722180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.922 [2024-10-01 13:52:33.722235] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.922 [2024-10-01 13:52:33.722260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.922 [2024-10-01 13:52:33.722276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.922 [2024-10-01 13:52:33.722501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.922 [2024-10-01 13:52:33.722535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.922 [2024-10-01 13:52:33.722625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.922 [2024-10-01 13:52:33.722651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.922 [2024-10-01 13:52:33.722668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.922 [2024-10-01 13:52:33.722686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.922 [2024-10-01 13:52:33.722701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.922 [2024-10-01 13:52:33.722716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.922 [2024-10-01 13:52:33.722769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.922 [2024-10-01 13:52:33.722793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.923 [2024-10-01 13:52:33.731561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.923 [2024-10-01 13:52:33.731624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.923 [2024-10-01 13:52:33.731743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.923 [2024-10-01 13:52:33.731776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.923 [2024-10-01 13:52:33.731795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.923 [2024-10-01 13:52:33.731846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.923 [2024-10-01 13:52:33.731870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.923 [2024-10-01 13:52:33.731887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.923 [2024-10-01 13:52:33.731936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.923 [2024-10-01 13:52:33.731965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.923 [2024-10-01 13:52:33.731992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.923 [2024-10-01 13:52:33.732010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.923 [2024-10-01 13:52:33.732025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.923 [2024-10-01 13:52:33.732042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.923 [2024-10-01 13:52:33.732057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.923 [2024-10-01 13:52:33.732093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.923 [2024-10-01 13:52:33.732127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.923 [2024-10-01 13:52:33.732145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.923 [2024-10-01 13:52:33.742152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.923 [2024-10-01 13:52:33.742223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.923 [2024-10-01 13:52:33.742338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.923 [2024-10-01 13:52:33.742371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.923 [2024-10-01 13:52:33.742399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.923 [2024-10-01 13:52:33.742452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.923 [2024-10-01 13:52:33.742476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.923 [2024-10-01 13:52:33.742492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.923 [2024-10-01 13:52:33.742524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.923 [2024-10-01 13:52:33.742564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.923 [2024-10-01 13:52:33.742594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.923 [2024-10-01 13:52:33.742612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.923 [2024-10-01 13:52:33.742628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.923 [2024-10-01 13:52:33.742646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.923 [2024-10-01 13:52:33.742660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.923 [2024-10-01 13:52:33.742674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.923 [2024-10-01 13:52:33.742707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.923 [2024-10-01 13:52:33.742726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.923 [2024-10-01 13:52:33.752698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.923 [2024-10-01 13:52:33.752763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.923 [2024-10-01 13:52:33.752871] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.923 [2024-10-01 13:52:33.752903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.923 [2024-10-01 13:52:33.752939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.923 [2024-10-01 13:52:33.752994] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.923 [2024-10-01 13:52:33.753020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.923 [2024-10-01 13:52:33.753035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.923 [2024-10-01 13:52:33.753069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.923 [2024-10-01 13:52:33.753128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.923 [2024-10-01 13:52:33.753158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.923 [2024-10-01 13:52:33.753176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.923 [2024-10-01 13:52:33.753192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.923 [2024-10-01 13:52:33.753210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.923 [2024-10-01 13:52:33.753224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.923 [2024-10-01 13:52:33.753238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.923 [2024-10-01 13:52:33.754467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.923 [2024-10-01 13:52:33.754505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.923 [2024-10-01 13:52:33.763834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.923 [2024-10-01 13:52:33.763906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.923 [2024-10-01 13:52:33.764061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.923 [2024-10-01 13:52:33.764097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.923 [2024-10-01 13:52:33.764116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.923 [2024-10-01 13:52:33.764169] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.923 [2024-10-01 13:52:33.764193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.923 [2024-10-01 13:52:33.764210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.923 [2024-10-01 13:52:33.764247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.923 [2024-10-01 13:52:33.764272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.923 [2024-10-01 13:52:33.764299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.923 [2024-10-01 13:52:33.764317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.923 [2024-10-01 13:52:33.764334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.923 [2024-10-01 13:52:33.764352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.923 [2024-10-01 13:52:33.764367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.923 [2024-10-01 13:52:33.764381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.923 [2024-10-01 13:52:33.764412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.923 [2024-10-01 13:52:33.764431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.923 [2024-10-01 13:52:33.775941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.923 [2024-10-01 13:52:33.776016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.923 [2024-10-01 13:52:33.776191] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.923 [2024-10-01 13:52:33.776227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.923 [2024-10-01 13:52:33.776285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.923 [2024-10-01 13:52:33.776342] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.923 [2024-10-01 13:52:33.776368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.923 [2024-10-01 13:52:33.776385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.923 [2024-10-01 13:52:33.776424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.923 [2024-10-01 13:52:33.776449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.923 [2024-10-01 13:52:33.776494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.923 [2024-10-01 13:52:33.776515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.923 [2024-10-01 13:52:33.776533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.923 [2024-10-01 13:52:33.776551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.923 [2024-10-01 13:52:33.776567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.923 [2024-10-01 13:52:33.776580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.923 [2024-10-01 13:52:33.776612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.923 [2024-10-01 13:52:33.776630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.923 [2024-10-01 13:52:33.786406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.923 [2024-10-01 13:52:33.786476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.923 [2024-10-01 13:52:33.786605] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.924 [2024-10-01 13:52:33.786640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.924 [2024-10-01 13:52:33.786659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.924 [2024-10-01 13:52:33.786711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.924 [2024-10-01 13:52:33.786735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.924 [2024-10-01 13:52:33.786762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.924 [2024-10-01 13:52:33.786798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.924 [2024-10-01 13:52:33.786823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.924 [2024-10-01 13:52:33.786849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.924 [2024-10-01 13:52:33.786866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.924 [2024-10-01 13:52:33.786882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.924 [2024-10-01 13:52:33.786898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.924 [2024-10-01 13:52:33.786929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.924 [2024-10-01 13:52:33.786946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.924 [2024-10-01 13:52:33.788212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.924 [2024-10-01 13:52:33.788251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.924 [2024-10-01 13:52:33.797539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.924 [2024-10-01 13:52:33.797606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.924 [2024-10-01 13:52:33.797729] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.924 [2024-10-01 13:52:33.797762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.924 [2024-10-01 13:52:33.797782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.924 [2024-10-01 13:52:33.797834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.924 [2024-10-01 13:52:33.797859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.924 [2024-10-01 13:52:33.797875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.924 [2024-10-01 13:52:33.797928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.924 [2024-10-01 13:52:33.797956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.924 [2024-10-01 13:52:33.797985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.924 [2024-10-01 13:52:33.798003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.924 [2024-10-01 13:52:33.798019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.924 [2024-10-01 13:52:33.798037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.924 [2024-10-01 13:52:33.798052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.924 [2024-10-01 13:52:33.798066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.924 [2024-10-01 13:52:33.798097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.924 [2024-10-01 13:52:33.798115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.924 [2024-10-01 13:52:33.809515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.924 [2024-10-01 13:52:33.809598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.924 [2024-10-01 13:52:33.809784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.924 [2024-10-01 13:52:33.809820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.924 [2024-10-01 13:52:33.809841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.924 [2024-10-01 13:52:33.809893] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.924 [2024-10-01 13:52:33.809931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.924 [2024-10-01 13:52:33.809952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.924 [2024-10-01 13:52:33.809989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.924 [2024-10-01 13:52:33.810014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.924 [2024-10-01 13:52:33.810104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.924 [2024-10-01 13:52:33.810128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.924 [2024-10-01 13:52:33.810145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.924 [2024-10-01 13:52:33.810164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.924 [2024-10-01 13:52:33.810179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.924 [2024-10-01 13:52:33.810193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.924 [2024-10-01 13:52:33.810233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.924 [2024-10-01 13:52:33.810253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.924 [2024-10-01 13:52:33.820017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.924 [2024-10-01 13:52:33.820100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.924 [2024-10-01 13:52:33.820220] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.924 [2024-10-01 13:52:33.820265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.924 [2024-10-01 13:52:33.820284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.924 [2024-10-01 13:52:33.820335] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.924 [2024-10-01 13:52:33.820359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.924 [2024-10-01 13:52:33.820375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.924 [2024-10-01 13:52:33.820411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.924 [2024-10-01 13:52:33.820435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.924 [2024-10-01 13:52:33.820462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.924 [2024-10-01 13:52:33.820479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.924 [2024-10-01 13:52:33.820495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.924 [2024-10-01 13:52:33.820513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.924 [2024-10-01 13:52:33.820528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.924 [2024-10-01 13:52:33.820542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.924 [2024-10-01 13:52:33.821776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.924 [2024-10-01 13:52:33.821816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.924 [2024-10-01 13:52:33.831222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.924 [2024-10-01 13:52:33.831290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.924 [2024-10-01 13:52:33.831411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.925 [2024-10-01 13:52:33.831445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.925 [2024-10-01 13:52:33.831464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.925 [2024-10-01 13:52:33.831561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.925 [2024-10-01 13:52:33.831587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.925 [2024-10-01 13:52:33.831605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.925 [2024-10-01 13:52:33.831640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.925 [2024-10-01 13:52:33.831665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.925 [2024-10-01 13:52:33.831697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.925 [2024-10-01 13:52:33.831715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.925 [2024-10-01 13:52:33.831732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.925 [2024-10-01 13:52:33.831749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.925 [2024-10-01 13:52:33.831765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.925 [2024-10-01 13:52:33.831778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.925 [2024-10-01 13:52:33.831809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.925 [2024-10-01 13:52:33.831827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.925 [2024-10-01 13:52:33.843146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.925 [2024-10-01 13:52:33.843218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.925 [2024-10-01 13:52:33.843376] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.925 [2024-10-01 13:52:33.843410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.925 [2024-10-01 13:52:33.843429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.925 [2024-10-01 13:52:33.843489] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.925 [2024-10-01 13:52:33.843514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.925 [2024-10-01 13:52:33.843531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.925 [2024-10-01 13:52:33.843566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.925 [2024-10-01 13:52:33.843591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.925 [2024-10-01 13:52:33.843618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.925 [2024-10-01 13:52:33.843637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.925 [2024-10-01 13:52:33.843652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.925 [2024-10-01 13:52:33.843670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.925 [2024-10-01 13:52:33.843685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.925 [2024-10-01 13:52:33.843699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.925 [2024-10-01 13:52:33.843748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.925 [2024-10-01 13:52:33.843799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.925 [2024-10-01 13:52:33.853453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.925 [2024-10-01 13:52:33.853521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.925 [2024-10-01 13:52:33.853638] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.925 [2024-10-01 13:52:33.853671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.925 [2024-10-01 13:52:33.853690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.925 [2024-10-01 13:52:33.853741] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.925 [2024-10-01 13:52:33.853773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.925 [2024-10-01 13:52:33.853789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.925 [2024-10-01 13:52:33.853823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.925 [2024-10-01 13:52:33.853847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.925 [2024-10-01 13:52:33.853874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.925 [2024-10-01 13:52:33.853892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.925 [2024-10-01 13:52:33.853907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.925 [2024-10-01 13:52:33.853942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.925 [2024-10-01 13:52:33.853958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.925 [2024-10-01 13:52:33.853971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.925 [2024-10-01 13:52:33.855233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.925 [2024-10-01 13:52:33.855272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.925 [2024-10-01 13:52:33.864599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.925 [2024-10-01 13:52:33.864661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.925 [2024-10-01 13:52:33.864792] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.925 [2024-10-01 13:52:33.864825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.925 [2024-10-01 13:52:33.864844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.925 [2024-10-01 13:52:33.864895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.925 [2024-10-01 13:52:33.864935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.925 [2024-10-01 13:52:33.864956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.925 [2024-10-01 13:52:33.864993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.925 [2024-10-01 13:52:33.865017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.925 8111.20 IOPS, 31.68 MiB/s [2024-10-01 13:52:33.866933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.925 [2024-10-01 13:52:33.867000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.925 [2024-10-01 13:52:33.867021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.925 [2024-10-01 13:52:33.867040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.925 [2024-10-01 13:52:33.867056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.925 [2024-10-01 13:52:33.867069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.925 [2024-10-01 13:52:33.867236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.925 [2024-10-01 13:52:33.867262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
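Note: the "8111.20 IOPS, 31.68 MiB/s" counter interleaved into the entry above is the benchmark progress line, not an error. The two figures are consistent with roughly 4 KiB per I/O; the 4 KiB block size is an assumption for illustration, not something the log states. A minimal sketch of the arithmetic, under that assumption:

/* Hedged sketch: sanity-check that the interleaved throughput counter
 * ("8111.20 IOPS, 31.68 MiB/s") matches ~4 KiB I/Os.
 * The 4096-byte block size is an assumption, not taken from the log. */
#include <stdio.h>

int main(void)
{
    const double iops = 8111.20;        /* value from the log line above */
    const double block_bytes = 4096.0;  /* assumed I/O size */
    double mib_per_s = iops * block_bytes / (1024.0 * 1024.0);
    printf("%.2f IOPS x %.0f B = %.2f MiB/s\n", iops, block_bytes, mib_per_s);
    return 0;                           /* prints ~31.68 MiB/s */
}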
00:18:34.925 [2024-10-01 13:52:33.876447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.925 [2024-10-01 13:52:33.876510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.925 [2024-10-01 13:52:33.876674] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.925 [2024-10-01 13:52:33.876708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.925 [2024-10-01 13:52:33.876727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.925 [2024-10-01 13:52:33.876778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.925 [2024-10-01 13:52:33.876802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.925 [2024-10-01 13:52:33.876819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.925 [2024-10-01 13:52:33.876855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.925 [2024-10-01 13:52:33.876880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.925 [2024-10-01 13:52:33.876907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.925 [2024-10-01 13:52:33.876943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.925 [2024-10-01 13:52:33.876959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.925 [2024-10-01 13:52:33.876977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.925 [2024-10-01 13:52:33.876992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.925 [2024-10-01 13:52:33.877006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.925 [2024-10-01 13:52:33.877038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.926 [2024-10-01 13:52:33.877056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.926 [2024-10-01 13:52:33.886906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.926 [2024-10-01 13:52:33.886983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.926 [2024-10-01 13:52:33.887097] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.926 [2024-10-01 13:52:33.887130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.926 [2024-10-01 13:52:33.887149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.926 [2024-10-01 13:52:33.887200] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.926 [2024-10-01 13:52:33.887257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.926 [2024-10-01 13:52:33.887276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.926 [2024-10-01 13:52:33.887311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.926 [2024-10-01 13:52:33.887345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.926 [2024-10-01 13:52:33.887371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.926 [2024-10-01 13:52:33.887388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.926 [2024-10-01 13:52:33.887403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.926 [2024-10-01 13:52:33.887419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.926 [2024-10-01 13:52:33.887434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.926 [2024-10-01 13:52:33.887447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.926 [2024-10-01 13:52:33.888678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.926 [2024-10-01 13:52:33.888718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.926 [2024-10-01 13:52:33.898191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.926 [2024-10-01 13:52:33.898249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.926 [2024-10-01 13:52:33.898372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.926 [2024-10-01 13:52:33.898404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.926 [2024-10-01 13:52:33.898424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.926 [2024-10-01 13:52:33.898476] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.926 [2024-10-01 13:52:33.898499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.926 [2024-10-01 13:52:33.898515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.926 [2024-10-01 13:52:33.898569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.926 [2024-10-01 13:52:33.898597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.926 [2024-10-01 13:52:33.898625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.926 [2024-10-01 13:52:33.898642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.926 [2024-10-01 13:52:33.898658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.926 [2024-10-01 13:52:33.898676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.926 [2024-10-01 13:52:33.898691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.926 [2024-10-01 13:52:33.898704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.926 [2024-10-01 13:52:33.898738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.926 [2024-10-01 13:52:33.898757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.926 [2024-10-01 13:52:33.910356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.926 [2024-10-01 13:52:33.910434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.926 [2024-10-01 13:52:33.910620] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.926 [2024-10-01 13:52:33.910659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.926 [2024-10-01 13:52:33.910678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.926 [2024-10-01 13:52:33.910730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.926 [2024-10-01 13:52:33.910755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.926 [2024-10-01 13:52:33.910771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.926 [2024-10-01 13:52:33.910807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.926 [2024-10-01 13:52:33.910832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.926 [2024-10-01 13:52:33.910878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.926 [2024-10-01 13:52:33.910901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.926 [2024-10-01 13:52:33.910946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.926 [2024-10-01 13:52:33.910967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.926 [2024-10-01 13:52:33.910983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.926 [2024-10-01 13:52:33.910997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.926 [2024-10-01 13:52:33.911029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.926 [2024-10-01 13:52:33.911047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.926 [2024-10-01 13:52:33.920860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.926 [2024-10-01 13:52:33.920947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.926 [2024-10-01 13:52:33.921070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.926 [2024-10-01 13:52:33.921103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.926 [2024-10-01 13:52:33.921122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.926 [2024-10-01 13:52:33.921175] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.926 [2024-10-01 13:52:33.921199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.926 [2024-10-01 13:52:33.921216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.926 [2024-10-01 13:52:33.921250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.926 [2024-10-01 13:52:33.921274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.926 [2024-10-01 13:52:33.921301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.926 [2024-10-01 13:52:33.921319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.926 [2024-10-01 13:52:33.921380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.926 [2024-10-01 13:52:33.921399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.926 [2024-10-01 13:52:33.921414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.926 [2024-10-01 13:52:33.921427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.926 [2024-10-01 13:52:33.922681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.926 [2024-10-01 13:52:33.922721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.926 [2024-10-01 13:52:33.931899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.926 [2024-10-01 13:52:33.931974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.926 [2024-10-01 13:52:33.932093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.926 [2024-10-01 13:52:33.932126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.926 [2024-10-01 13:52:33.932145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.926 [2024-10-01 13:52:33.932196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.926 [2024-10-01 13:52:33.932221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.926 [2024-10-01 13:52:33.932251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.926 [2024-10-01 13:52:33.932287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.926 [2024-10-01 13:52:33.932311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.926 [2024-10-01 13:52:33.932338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.926 [2024-10-01 13:52:33.932356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.926 [2024-10-01 13:52:33.932373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.927 [2024-10-01 13:52:33.932391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.927 [2024-10-01 13:52:33.932406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.927 [2024-10-01 13:52:33.932420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.927 [2024-10-01 13:52:33.932457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.927 [2024-10-01 13:52:33.932475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.927 [2024-10-01 13:52:33.942062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.927 [2024-10-01 13:52:33.942162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.927 [2024-10-01 13:52:33.942268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.927 [2024-10-01 13:52:33.942300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.927 [2024-10-01 13:52:33.942318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.927 [2024-10-01 13:52:33.943618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.927 [2024-10-01 13:52:33.943665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.927 [2024-10-01 13:52:33.943731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.927 [2024-10-01 13:52:33.943754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.927 [2024-10-01 13:52:33.944641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.927 [2024-10-01 13:52:33.944684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.927 [2024-10-01 13:52:33.944704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.927 [2024-10-01 13:52:33.944721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.927 [2024-10-01 13:52:33.944858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.927 [2024-10-01 13:52:33.944884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.927 [2024-10-01 13:52:33.944900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.927 [2024-10-01 13:52:33.944930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.927 [2024-10-01 13:52:33.944984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.927 [2024-10-01 13:52:33.953339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.927 [2024-10-01 13:52:33.953421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.927 [2024-10-01 13:52:33.953541] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.927 [2024-10-01 13:52:33.953574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.927 [2024-10-01 13:52:33.953593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.927 [2024-10-01 13:52:33.953645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.927 [2024-10-01 13:52:33.953670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.927 [2024-10-01 13:52:33.953686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.927 [2024-10-01 13:52:33.954664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.927 [2024-10-01 13:52:33.954712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.927 [2024-10-01 13:52:33.954957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.927 [2024-10-01 13:52:33.954986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.927 [2024-10-01 13:52:33.955012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.927 [2024-10-01 13:52:33.955031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.927 [2024-10-01 13:52:33.955047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.927 [2024-10-01 13:52:33.955061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.927 [2024-10-01 13:52:33.955141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.927 [2024-10-01 13:52:33.955163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.927 [2024-10-01 13:52:33.964223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.927 [2024-10-01 13:52:33.964314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.927 [2024-10-01 13:52:33.965344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.927 [2024-10-01 13:52:33.965392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.927 [2024-10-01 13:52:33.965415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.927 [2024-10-01 13:52:33.965470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.927 [2024-10-01 13:52:33.965495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.927 [2024-10-01 13:52:33.965512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.927 [2024-10-01 13:52:33.966149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.927 [2024-10-01 13:52:33.966194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.927 [2024-10-01 13:52:33.966306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.927 [2024-10-01 13:52:33.966333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.927 [2024-10-01 13:52:33.966349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.927 [2024-10-01 13:52:33.966368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.927 [2024-10-01 13:52:33.966383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.927 [2024-10-01 13:52:33.966397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.927 [2024-10-01 13:52:33.966429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.927 [2024-10-01 13:52:33.966448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.927 [2024-10-01 13:52:33.974594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.927 [2024-10-01 13:52:33.974660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.927 [2024-10-01 13:52:33.974776] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.927 [2024-10-01 13:52:33.974808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.927 [2024-10-01 13:52:33.974826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.927 [2024-10-01 13:52:33.974884] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.927 [2024-10-01 13:52:33.974909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.927 [2024-10-01 13:52:33.974943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.927 [2024-10-01 13:52:33.974978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.927 [2024-10-01 13:52:33.975003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.927 [2024-10-01 13:52:33.975029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.927 [2024-10-01 13:52:33.975047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.927 [2024-10-01 13:52:33.975062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.927 [2024-10-01 13:52:33.975105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.927 [2024-10-01 13:52:33.975122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.927 [2024-10-01 13:52:33.975136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.927 [2024-10-01 13:52:33.975167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.927 [2024-10-01 13:52:33.975185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.927 [2024-10-01 13:52:33.984748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.927 [2024-10-01 13:52:33.984858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.927 [2024-10-01 13:52:33.984975] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.927 [2024-10-01 13:52:33.985014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.927 [2024-10-01 13:52:33.985033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.927 [2024-10-01 13:52:33.985109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.927 [2024-10-01 13:52:33.985137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.927 [2024-10-01 13:52:33.985154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.927 [2024-10-01 13:52:33.985174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.927 [2024-10-01 13:52:33.985208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.927 [2024-10-01 13:52:33.985228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.928 [2024-10-01 13:52:33.985242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.928 [2024-10-01 13:52:33.985257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.928 [2024-10-01 13:52:33.985288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.928 [2024-10-01 13:52:33.985306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.928 [2024-10-01 13:52:33.985321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.928 [2024-10-01 13:52:33.985335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.928 [2024-10-01 13:52:33.986623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.928 [2024-10-01 13:52:33.995022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.928 [2024-10-01 13:52:33.995078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.928 [2024-10-01 13:52:33.995186] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.928 [2024-10-01 13:52:33.995219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.928 [2024-10-01 13:52:33.995237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.928 [2024-10-01 13:52:33.995289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.928 [2024-10-01 13:52:33.995314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.928 [2024-10-01 13:52:33.995331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.928 [2024-10-01 13:52:33.995402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.928 [2024-10-01 13:52:33.995428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.928 [2024-10-01 13:52:33.995468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.928 [2024-10-01 13:52:33.995488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.928 [2024-10-01 13:52:33.995503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.928 [2024-10-01 13:52:33.995521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.928 [2024-10-01 13:52:33.995537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.928 [2024-10-01 13:52:33.995551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.928 [2024-10-01 13:52:33.995599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.928 [2024-10-01 13:52:33.995621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.928 [2024-10-01 13:52:34.005395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.928 [2024-10-01 13:52:34.005472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.928 [2024-10-01 13:52:34.005582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.928 [2024-10-01 13:52:34.005614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.928 [2024-10-01 13:52:34.005632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.928 [2024-10-01 13:52:34.005683] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.928 [2024-10-01 13:52:34.005708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.928 [2024-10-01 13:52:34.005725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.928 [2024-10-01 13:52:34.005774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.928 [2024-10-01 13:52:34.005807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.928 [2024-10-01 13:52:34.005835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.928 [2024-10-01 13:52:34.005852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.928 [2024-10-01 13:52:34.005868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.928 [2024-10-01 13:52:34.005885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.928 [2024-10-01 13:52:34.005901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.928 [2024-10-01 13:52:34.005929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.928 [2024-10-01 13:52:34.007175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.928 [2024-10-01 13:52:34.007216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.928 [2024-10-01 13:52:34.016633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.928 [2024-10-01 13:52:34.016692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.928 [2024-10-01 13:52:34.016839] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.928 [2024-10-01 13:52:34.016873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.928 [2024-10-01 13:52:34.016892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.928 [2024-10-01 13:52:34.016967] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.928 [2024-10-01 13:52:34.016993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.928 [2024-10-01 13:52:34.017009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.928 [2024-10-01 13:52:34.017043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.928 [2024-10-01 13:52:34.017068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.928 [2024-10-01 13:52:34.017095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.928 [2024-10-01 13:52:34.017112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.928 [2024-10-01 13:52:34.017127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.928 [2024-10-01 13:52:34.017145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.928 [2024-10-01 13:52:34.017160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.928 [2024-10-01 13:52:34.017178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.928 [2024-10-01 13:52:34.017208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.928 [2024-10-01 13:52:34.017226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.928 [2024-10-01 13:52:34.028691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.928 [2024-10-01 13:52:34.028755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.928 [2024-10-01 13:52:34.028900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.928 [2024-10-01 13:52:34.028950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.928 [2024-10-01 13:52:34.028971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.928 [2024-10-01 13:52:34.029028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.928 [2024-10-01 13:52:34.029053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.928 [2024-10-01 13:52:34.029069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.928 [2024-10-01 13:52:34.029105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.928 [2024-10-01 13:52:34.029130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.928 [2024-10-01 13:52:34.029157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.928 [2024-10-01 13:52:34.029175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.928 [2024-10-01 13:52:34.029191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.928 [2024-10-01 13:52:34.029208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.928 [2024-10-01 13:52:34.029224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.928 [2024-10-01 13:52:34.029268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.928 [2024-10-01 13:52:34.029303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.928 [2024-10-01 13:52:34.029322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.928 [2024-10-01 13:52:34.039184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.928 [2024-10-01 13:52:34.039248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.928 [2024-10-01 13:52:34.039361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.928 [2024-10-01 13:52:34.039394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.928 [2024-10-01 13:52:34.039412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.928 [2024-10-01 13:52:34.039465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.928 [2024-10-01 13:52:34.039488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.928 [2024-10-01 13:52:34.039505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.928 [2024-10-01 13:52:34.039541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.928 [2024-10-01 13:52:34.039564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.928 [2024-10-01 13:52:34.039591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.928 [2024-10-01 13:52:34.039609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.928 [2024-10-01 13:52:34.039624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.928 [2024-10-01 13:52:34.039642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.929 [2024-10-01 13:52:34.039657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.929 [2024-10-01 13:52:34.039671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.929 [2024-10-01 13:52:34.040894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.929 [2024-10-01 13:52:34.040944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.929 [2024-10-01 13:52:34.050329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.929 [2024-10-01 13:52:34.050391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.929 [2024-10-01 13:52:34.050505] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.929 [2024-10-01 13:52:34.050549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.929 [2024-10-01 13:52:34.050571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.929 [2024-10-01 13:52:34.050627] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.929 [2024-10-01 13:52:34.050652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.929 [2024-10-01 13:52:34.050669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.929 [2024-10-01 13:52:34.050704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.929 [2024-10-01 13:52:34.050761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.929 [2024-10-01 13:52:34.050791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.929 [2024-10-01 13:52:34.050809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.929 [2024-10-01 13:52:34.050824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.929 [2024-10-01 13:52:34.050842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.929 [2024-10-01 13:52:34.050857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.929 [2024-10-01 13:52:34.050871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.929 [2024-10-01 13:52:34.050900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.929 [2024-10-01 13:52:34.050934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.929 [2024-10-01 13:52:34.062287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.929 [2024-10-01 13:52:34.062361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.929 [2024-10-01 13:52:34.062519] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.929 [2024-10-01 13:52:34.062569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.929 [2024-10-01 13:52:34.062590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.929 [2024-10-01 13:52:34.062647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.929 [2024-10-01 13:52:34.062672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.929 [2024-10-01 13:52:34.062689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.929 [2024-10-01 13:52:34.062725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.929 [2024-10-01 13:52:34.062750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.929 [2024-10-01 13:52:34.062777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.929 [2024-10-01 13:52:34.062795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.929 [2024-10-01 13:52:34.062812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.929 [2024-10-01 13:52:34.062831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.929 [2024-10-01 13:52:34.062846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.929 [2024-10-01 13:52:34.062860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.929 [2024-10-01 13:52:34.062927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.929 [2024-10-01 13:52:34.062961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.929 [2024-10-01 13:52:34.072764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.929 [2024-10-01 13:52:34.072827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.929 [2024-10-01 13:52:34.072954] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.929 [2024-10-01 13:52:34.072987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.929 [2024-10-01 13:52:34.073042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.929 [2024-10-01 13:52:34.073099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.929 [2024-10-01 13:52:34.073134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.929 [2024-10-01 13:52:34.073150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.929 [2024-10-01 13:52:34.073186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.929 [2024-10-01 13:52:34.073211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.929 [2024-10-01 13:52:34.073238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.929 [2024-10-01 13:52:34.073256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.929 [2024-10-01 13:52:34.073271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.929 [2024-10-01 13:52:34.073288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.929 [2024-10-01 13:52:34.073304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.929 [2024-10-01 13:52:34.073318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.929 [2024-10-01 13:52:34.074554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.929 [2024-10-01 13:52:34.074592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.929 [2024-10-01 13:52:34.083994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.929 [2024-10-01 13:52:34.084057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.929 [2024-10-01 13:52:34.084170] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.929 [2024-10-01 13:52:34.084207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.929 [2024-10-01 13:52:34.084226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.929 [2024-10-01 13:52:34.084289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.929 [2024-10-01 13:52:34.084313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.929 [2024-10-01 13:52:34.084330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.929 [2024-10-01 13:52:34.084364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.929 [2024-10-01 13:52:34.084387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.929 [2024-10-01 13:52:34.084415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.929 [2024-10-01 13:52:34.084433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.929 [2024-10-01 13:52:34.084449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.929 [2024-10-01 13:52:34.084467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.929 [2024-10-01 13:52:34.084482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.929 [2024-10-01 13:52:34.084522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.929 [2024-10-01 13:52:34.084557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.929 [2024-10-01 13:52:34.084575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.929 [2024-10-01 13:52:34.096049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.929 [2024-10-01 13:52:34.096118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.929 [2024-10-01 13:52:34.096263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.929 [2024-10-01 13:52:34.096297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.929 [2024-10-01 13:52:34.096316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.929 [2024-10-01 13:52:34.096367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.929 [2024-10-01 13:52:34.096392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.929 [2024-10-01 13:52:34.096409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.929 [2024-10-01 13:52:34.096443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.929 [2024-10-01 13:52:34.096467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.929 [2024-10-01 13:52:34.096507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.929 [2024-10-01 13:52:34.096527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.929 [2024-10-01 13:52:34.096544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.929 [2024-10-01 13:52:34.096561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.929 [2024-10-01 13:52:34.096577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.929 [2024-10-01 13:52:34.096591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.929 [2024-10-01 13:52:34.096636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.929 [2024-10-01 13:52:34.096660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.930 [2024-10-01 13:52:34.106620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.930 [2024-10-01 13:52:34.106678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.930 [2024-10-01 13:52:34.106792] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.930 [2024-10-01 13:52:34.106825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.930 [2024-10-01 13:52:34.106843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.930 [2024-10-01 13:52:34.106895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.930 [2024-10-01 13:52:34.106934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.930 [2024-10-01 13:52:34.106955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.930 [2024-10-01 13:52:34.106991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.930 [2024-10-01 13:52:34.107015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.930 [2024-10-01 13:52:34.107085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.930 [2024-10-01 13:52:34.107105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.930 [2024-10-01 13:52:34.107121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.930 [2024-10-01 13:52:34.107139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.930 [2024-10-01 13:52:34.107154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.930 [2024-10-01 13:52:34.107167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.930 [2024-10-01 13:52:34.108388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.930 [2024-10-01 13:52:34.108427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.930 [2024-10-01 13:52:34.117855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.930 [2024-10-01 13:52:34.117926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.930 [2024-10-01 13:52:34.118039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.930 [2024-10-01 13:52:34.118071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.930 [2024-10-01 13:52:34.118091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.930 [2024-10-01 13:52:34.118141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.930 [2024-10-01 13:52:34.118165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.930 [2024-10-01 13:52:34.118181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.930 [2024-10-01 13:52:34.118216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.930 [2024-10-01 13:52:34.118240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.930 [2024-10-01 13:52:34.118268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.930 [2024-10-01 13:52:34.118286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.930 [2024-10-01 13:52:34.118302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.930 [2024-10-01 13:52:34.118319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.930 [2024-10-01 13:52:34.118334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.930 [2024-10-01 13:52:34.118347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.930 [2024-10-01 13:52:34.118378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.930 [2024-10-01 13:52:34.118396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.930 [2024-10-01 13:52:34.129892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.930 [2024-10-01 13:52:34.129968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.930 [2024-10-01 13:52:34.130114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.930 [2024-10-01 13:52:34.130148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.930 [2024-10-01 13:52:34.130167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.930 [2024-10-01 13:52:34.130258] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.930 [2024-10-01 13:52:34.130284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.930 [2024-10-01 13:52:34.130300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.930 [2024-10-01 13:52:34.130340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.930 [2024-10-01 13:52:34.130365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.930 [2024-10-01 13:52:34.130391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.930 [2024-10-01 13:52:34.130409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.930 [2024-10-01 13:52:34.130424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.930 [2024-10-01 13:52:34.130442] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.930 [2024-10-01 13:52:34.130457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.930 [2024-10-01 13:52:34.130471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.930 [2024-10-01 13:52:34.130522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.930 [2024-10-01 13:52:34.130556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.930 [2024-10-01 13:52:34.140431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.930 [2024-10-01 13:52:34.140495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.930 [2024-10-01 13:52:34.140607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.930 [2024-10-01 13:52:34.140639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.930 [2024-10-01 13:52:34.140658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.930 [2024-10-01 13:52:34.140711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.930 [2024-10-01 13:52:34.140735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.930 [2024-10-01 13:52:34.140752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.930 [2024-10-01 13:52:34.140787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.930 [2024-10-01 13:52:34.140811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.930 [2024-10-01 13:52:34.140838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.930 [2024-10-01 13:52:34.140856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.930 [2024-10-01 13:52:34.140871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.930 [2024-10-01 13:52:34.140890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.930 [2024-10-01 13:52:34.140905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.930 [2024-10-01 13:52:34.140939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.930 [2024-10-01 13:52:34.142163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.930 [2024-10-01 13:52:34.142223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.930 [2024-10-01 13:52:34.151622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.930 [2024-10-01 13:52:34.151686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.930 [2024-10-01 13:52:34.151803] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.930 [2024-10-01 13:52:34.151836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.930 [2024-10-01 13:52:34.151855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.930 [2024-10-01 13:52:34.151908] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.930 [2024-10-01 13:52:34.151949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.930 [2024-10-01 13:52:34.151966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.930 [2024-10-01 13:52:34.152001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.930 [2024-10-01 13:52:34.152025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.930 [2024-10-01 13:52:34.152051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.930 [2024-10-01 13:52:34.152068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.930 [2024-10-01 13:52:34.152084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.930 [2024-10-01 13:52:34.152102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.930 [2024-10-01 13:52:34.152117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.930 [2024-10-01 13:52:34.152130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.930 [2024-10-01 13:52:34.152161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.930 [2024-10-01 13:52:34.152180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.930 [2024-10-01 13:52:34.163645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.930 [2024-10-01 13:52:34.163730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.930 [2024-10-01 13:52:34.163896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.930 [2024-10-01 13:52:34.163945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.931 [2024-10-01 13:52:34.163966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.931 [2024-10-01 13:52:34.164020] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.931 [2024-10-01 13:52:34.164045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.931 [2024-10-01 13:52:34.164062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.931 [2024-10-01 13:52:34.164098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.931 [2024-10-01 13:52:34.164123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.931 [2024-10-01 13:52:34.164150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.931 [2024-10-01 13:52:34.164207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.931 [2024-10-01 13:52:34.164224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.931 [2024-10-01 13:52:34.164243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.931 [2024-10-01 13:52:34.164258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.931 [2024-10-01 13:52:34.164272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.931 [2024-10-01 13:52:34.164323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.931 [2024-10-01 13:52:34.164345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.931 [2024-10-01 13:52:34.174236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.931 [2024-10-01 13:52:34.174303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.931 [2024-10-01 13:52:34.174415] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.931 [2024-10-01 13:52:34.174448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.931 [2024-10-01 13:52:34.174467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.931 [2024-10-01 13:52:34.174518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.931 [2024-10-01 13:52:34.174556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.931 [2024-10-01 13:52:34.174576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.931 [2024-10-01 13:52:34.174611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.931 [2024-10-01 13:52:34.174636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.931 [2024-10-01 13:52:34.174663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.931 [2024-10-01 13:52:34.174680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.931 [2024-10-01 13:52:34.174697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.931 [2024-10-01 13:52:34.174724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.931 [2024-10-01 13:52:34.174739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.931 [2024-10-01 13:52:34.174753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.931 [2024-10-01 13:52:34.176001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.931 [2024-10-01 13:52:34.176039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.931 [2024-10-01 13:52:34.185486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.931 [2024-10-01 13:52:34.185552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.931 [2024-10-01 13:52:34.185679] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.931 [2024-10-01 13:52:34.185712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.931 [2024-10-01 13:52:34.185731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.931 [2024-10-01 13:52:34.185783] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.931 [2024-10-01 13:52:34.185839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.931 [2024-10-01 13:52:34.185857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.931 [2024-10-01 13:52:34.185894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.931 [2024-10-01 13:52:34.185938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.931 [2024-10-01 13:52:34.185969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.931 [2024-10-01 13:52:34.185987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.931 [2024-10-01 13:52:34.186003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.931 [2024-10-01 13:52:34.186021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.931 [2024-10-01 13:52:34.186036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.931 [2024-10-01 13:52:34.186050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.931 [2024-10-01 13:52:34.186091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.931 [2024-10-01 13:52:34.186110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.931 [2024-10-01 13:52:34.197646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.931 [2024-10-01 13:52:34.197708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.931 [2024-10-01 13:52:34.197868] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.931 [2024-10-01 13:52:34.197902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.931 [2024-10-01 13:52:34.197947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.931 [2024-10-01 13:52:34.198004] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.931 [2024-10-01 13:52:34.198029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.931 [2024-10-01 13:52:34.198046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.931 [2024-10-01 13:52:34.198116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.931 [2024-10-01 13:52:34.198142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.931 [2024-10-01 13:52:34.198182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.931 [2024-10-01 13:52:34.198203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.931 [2024-10-01 13:52:34.198219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.931 [2024-10-01 13:52:34.198237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.931 [2024-10-01 13:52:34.198252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.931 [2024-10-01 13:52:34.198266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.931 [2024-10-01 13:52:34.198315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.931 [2024-10-01 13:52:34.198336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.931 [2024-10-01 13:52:34.208552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.931 [2024-10-01 13:52:34.208620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.931 [2024-10-01 13:52:34.208733] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.931 [2024-10-01 13:52:34.208776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.931 [2024-10-01 13:52:34.208795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.931 [2024-10-01 13:52:34.208846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.931 [2024-10-01 13:52:34.208871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.931 [2024-10-01 13:52:34.208887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.931 [2024-10-01 13:52:34.208935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.931 [2024-10-01 13:52:34.208963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.931 [2024-10-01 13:52:34.208991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.931 [2024-10-01 13:52:34.209008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.931 [2024-10-01 13:52:34.209024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.931 [2024-10-01 13:52:34.209042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.931 [2024-10-01 13:52:34.209057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.932 [2024-10-01 13:52:34.209070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.932 [2024-10-01 13:52:34.210292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.932 [2024-10-01 13:52:34.210330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.932 [2024-10-01 13:52:34.219763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.932 [2024-10-01 13:52:34.219828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.932 [2024-10-01 13:52:34.219958] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.932 [2024-10-01 13:52:34.219992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.932 [2024-10-01 13:52:34.220011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.932 [2024-10-01 13:52:34.220064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.932 [2024-10-01 13:52:34.220088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.932 [2024-10-01 13:52:34.220105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.932 [2024-10-01 13:52:34.220140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.932 [2024-10-01 13:52:34.220165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.932 [2024-10-01 13:52:34.220191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.932 [2024-10-01 13:52:34.220209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.932 [2024-10-01 13:52:34.220256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.932 [2024-10-01 13:52:34.220275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.932 [2024-10-01 13:52:34.220290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.932 [2024-10-01 13:52:34.220304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.932 [2024-10-01 13:52:34.220336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.932 [2024-10-01 13:52:34.220354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.932 [2024-10-01 13:52:34.231695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.932 [2024-10-01 13:52:34.231764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.932 [2024-10-01 13:52:34.231945] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.932 [2024-10-01 13:52:34.231980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.932 [2024-10-01 13:52:34.231999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.932 [2024-10-01 13:52:34.232051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.932 [2024-10-01 13:52:34.232076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.932 [2024-10-01 13:52:34.232092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.932 [2024-10-01 13:52:34.232129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.932 [2024-10-01 13:52:34.232153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.932 [2024-10-01 13:52:34.232180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.932 [2024-10-01 13:52:34.232198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.932 [2024-10-01 13:52:34.232214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.932 [2024-10-01 13:52:34.232232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.932 [2024-10-01 13:52:34.232247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.932 [2024-10-01 13:52:34.232262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.932 [2024-10-01 13:52:34.232292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.932 [2024-10-01 13:52:34.232311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.932 [2024-10-01 13:52:34.242149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.932 [2024-10-01 13:52:34.242210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.932 [2024-10-01 13:52:34.242318] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.932 [2024-10-01 13:52:34.242350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.932 [2024-10-01 13:52:34.242369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.932 [2024-10-01 13:52:34.242420] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.932 [2024-10-01 13:52:34.242444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.932 [2024-10-01 13:52:34.242496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.932 [2024-10-01 13:52:34.242532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.932 [2024-10-01 13:52:34.242571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.932 [2024-10-01 13:52:34.242601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.932 [2024-10-01 13:52:34.242619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.932 [2024-10-01 13:52:34.242634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.932 [2024-10-01 13:52:34.242651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.932 [2024-10-01 13:52:34.242666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.932 [2024-10-01 13:52:34.242679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.932 [2024-10-01 13:52:34.243897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.932 [2024-10-01 13:52:34.243948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.932 [2024-10-01 13:52:34.253446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.932 [2024-10-01 13:52:34.253509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.932 [2024-10-01 13:52:34.253622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.932 [2024-10-01 13:52:34.253654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.932 [2024-10-01 13:52:34.253674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.932 [2024-10-01 13:52:34.253725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.932 [2024-10-01 13:52:34.253749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.932 [2024-10-01 13:52:34.253765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.932 [2024-10-01 13:52:34.253800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.932 [2024-10-01 13:52:34.253824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.932 [2024-10-01 13:52:34.253851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.932 [2024-10-01 13:52:34.253869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.932 [2024-10-01 13:52:34.253884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.932 [2024-10-01 13:52:34.253902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.932 [2024-10-01 13:52:34.253936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.932 [2024-10-01 13:52:34.253952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.932 [2024-10-01 13:52:34.253984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.932 [2024-10-01 13:52:34.254003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.932 [2024-10-01 13:52:34.265524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.932 [2024-10-01 13:52:34.265627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.932 [2024-10-01 13:52:34.265798] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.932 [2024-10-01 13:52:34.265833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.932 [2024-10-01 13:52:34.265852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.932 [2024-10-01 13:52:34.265903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.932 [2024-10-01 13:52:34.265957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.932 [2024-10-01 13:52:34.265975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.932 [2024-10-01 13:52:34.266011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.932 [2024-10-01 13:52:34.266035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.932 [2024-10-01 13:52:34.266063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.932 [2024-10-01 13:52:34.266081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.932 [2024-10-01 13:52:34.266096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.932 [2024-10-01 13:52:34.266114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.932 [2024-10-01 13:52:34.266129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.932 [2024-10-01 13:52:34.266143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.932 [2024-10-01 13:52:34.266173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.932 [2024-10-01 13:52:34.266192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.932 [2024-10-01 13:52:34.276030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.933 [2024-10-01 13:52:34.276088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.933 [2024-10-01 13:52:34.276194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.933 [2024-10-01 13:52:34.276226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.933 [2024-10-01 13:52:34.276244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.933 [2024-10-01 13:52:34.276295] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.933 [2024-10-01 13:52:34.276319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.933 [2024-10-01 13:52:34.276336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.933 [2024-10-01 13:52:34.276369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.933 [2024-10-01 13:52:34.276393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.933 [2024-10-01 13:52:34.276420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.933 [2024-10-01 13:52:34.276438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.933 [2024-10-01 13:52:34.276453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.933 [2024-10-01 13:52:34.276495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.933 [2024-10-01 13:52:34.276513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.933 [2024-10-01 13:52:34.276526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.933 [2024-10-01 13:52:34.277754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.933 [2024-10-01 13:52:34.277793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.933 [2024-10-01 13:52:34.287181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.933 [2024-10-01 13:52:34.287237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.933 [2024-10-01 13:52:34.287346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.933 [2024-10-01 13:52:34.287379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.933 [2024-10-01 13:52:34.287398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.933 [2024-10-01 13:52:34.287449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.933 [2024-10-01 13:52:34.287473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.933 [2024-10-01 13:52:34.287489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.933 [2024-10-01 13:52:34.287523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.933 [2024-10-01 13:52:34.287547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.933 [2024-10-01 13:52:34.287573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.933 [2024-10-01 13:52:34.287590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.933 [2024-10-01 13:52:34.287605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.933 [2024-10-01 13:52:34.287622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.933 [2024-10-01 13:52:34.287637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.933 [2024-10-01 13:52:34.287651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.933 [2024-10-01 13:52:34.287680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.933 [2024-10-01 13:52:34.287699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.933 [2024-10-01 13:52:34.299237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.933 [2024-10-01 13:52:34.299323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.933 [2024-10-01 13:52:34.299481] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.933 [2024-10-01 13:52:34.299516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.933 [2024-10-01 13:52:34.299535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.933 [2024-10-01 13:52:34.299587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.933 [2024-10-01 13:52:34.299612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.933 [2024-10-01 13:52:34.299629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.933 [2024-10-01 13:52:34.299701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.933 [2024-10-01 13:52:34.299728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.933 [2024-10-01 13:52:34.299756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.933 [2024-10-01 13:52:34.299773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.933 [2024-10-01 13:52:34.299789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.933 [2024-10-01 13:52:34.299807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.933 [2024-10-01 13:52:34.299823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.933 [2024-10-01 13:52:34.299837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.933 [2024-10-01 13:52:34.299867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.933 [2024-10-01 13:52:34.299885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.933 [2024-10-01 13:52:34.309778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.933 [2024-10-01 13:52:34.309848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.933 [2024-10-01 13:52:34.309979] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.933 [2024-10-01 13:52:34.310013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.933 [2024-10-01 13:52:34.310033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.933 [2024-10-01 13:52:34.310084] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.933 [2024-10-01 13:52:34.310120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.933 [2024-10-01 13:52:34.310137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.933 [2024-10-01 13:52:34.310172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.933 [2024-10-01 13:52:34.310197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.933 [2024-10-01 13:52:34.310223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.933 [2024-10-01 13:52:34.310241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.933 [2024-10-01 13:52:34.310257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.933 [2024-10-01 13:52:34.310275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.933 [2024-10-01 13:52:34.310290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.933 [2024-10-01 13:52:34.310305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.933 [2024-10-01 13:52:34.311547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.933 [2024-10-01 13:52:34.311585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.933 [2024-10-01 13:52:34.321033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.933 [2024-10-01 13:52:34.321097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.933 [2024-10-01 13:52:34.321249] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.933 [2024-10-01 13:52:34.321284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.933 [2024-10-01 13:52:34.321303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.933 [2024-10-01 13:52:34.321354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.933 [2024-10-01 13:52:34.321379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.933 [2024-10-01 13:52:34.321399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.933 [2024-10-01 13:52:34.321433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.933 [2024-10-01 13:52:34.321457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.933 [2024-10-01 13:52:34.321484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.933 [2024-10-01 13:52:34.321501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.933 [2024-10-01 13:52:34.321516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.933 [2024-10-01 13:52:34.321533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.933 [2024-10-01 13:52:34.321548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.933 [2024-10-01 13:52:34.321562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.933 [2024-10-01 13:52:34.321592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.933 [2024-10-01 13:52:34.321610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.933 [2024-10-01 13:52:34.333267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.933 [2024-10-01 13:52:34.333345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.933 [2024-10-01 13:52:34.333505] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.933 [2024-10-01 13:52:34.333540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.934 [2024-10-01 13:52:34.333560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.934 [2024-10-01 13:52:34.333611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.934 [2024-10-01 13:52:34.333636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.934 [2024-10-01 13:52:34.333653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.934 [2024-10-01 13:52:34.333688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.934 [2024-10-01 13:52:34.333713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.934 [2024-10-01 13:52:34.333746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.934 [2024-10-01 13:52:34.333764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.934 [2024-10-01 13:52:34.333780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.934 [2024-10-01 13:52:34.333798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.934 [2024-10-01 13:52:34.333843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.934 [2024-10-01 13:52:34.333859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.934 [2024-10-01 13:52:34.333891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.934 [2024-10-01 13:52:34.333958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.934 [2024-10-01 13:52:34.343805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.934 [2024-10-01 13:52:34.343865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.934 [2024-10-01 13:52:34.343992] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.934 [2024-10-01 13:52:34.344026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.934 [2024-10-01 13:52:34.344046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.934 [2024-10-01 13:52:34.344097] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.934 [2024-10-01 13:52:34.344121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.934 [2024-10-01 13:52:34.344137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.934 [2024-10-01 13:52:34.344172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.934 [2024-10-01 13:52:34.344196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.934 [2024-10-01 13:52:34.344222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.934 [2024-10-01 13:52:34.344240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.934 [2024-10-01 13:52:34.344255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.934 [2024-10-01 13:52:34.344273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.934 [2024-10-01 13:52:34.344288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.934 [2024-10-01 13:52:34.344302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.934 [2024-10-01 13:52:34.345527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.934 [2024-10-01 13:52:34.345565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.934 [2024-10-01 13:52:34.355042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.934 [2024-10-01 13:52:34.355103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.934 [2024-10-01 13:52:34.355216] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.934 [2024-10-01 13:52:34.355249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.934 [2024-10-01 13:52:34.355268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.934 [2024-10-01 13:52:34.355319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.934 [2024-10-01 13:52:34.355343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.934 [2024-10-01 13:52:34.355359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.934 [2024-10-01 13:52:34.355394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.934 [2024-10-01 13:52:34.355453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.934 [2024-10-01 13:52:34.355483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.934 [2024-10-01 13:52:34.355501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.934 [2024-10-01 13:52:34.355517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.934 [2024-10-01 13:52:34.355534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.934 [2024-10-01 13:52:34.355549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.934 [2024-10-01 13:52:34.355563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.934 [2024-10-01 13:52:34.355594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.934 [2024-10-01 13:52:34.355612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.934 [2024-10-01 13:52:34.367116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.934 [2024-10-01 13:52:34.367189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.934 [2024-10-01 13:52:34.367343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.934 [2024-10-01 13:52:34.367376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.934 [2024-10-01 13:52:34.367395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.934 [2024-10-01 13:52:34.367447] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.934 [2024-10-01 13:52:34.367471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.934 [2024-10-01 13:52:34.367487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.934 [2024-10-01 13:52:34.367523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.934 [2024-10-01 13:52:34.367547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.934 [2024-10-01 13:52:34.367575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.934 [2024-10-01 13:52:34.367593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.934 [2024-10-01 13:52:34.367608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.934 [2024-10-01 13:52:34.367627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.934 [2024-10-01 13:52:34.367642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.934 [2024-10-01 13:52:34.367656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.934 [2024-10-01 13:52:34.367687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.934 [2024-10-01 13:52:34.367706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.934 [2024-10-01 13:52:34.377506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.934 [2024-10-01 13:52:34.377568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.934 [2024-10-01 13:52:34.377676] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.934 [2024-10-01 13:52:34.377707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.934 [2024-10-01 13:52:34.377758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.934 [2024-10-01 13:52:34.377815] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.934 [2024-10-01 13:52:34.377840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.934 [2024-10-01 13:52:34.377857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.934 [2024-10-01 13:52:34.377891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.934 [2024-10-01 13:52:34.377933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.934 [2024-10-01 13:52:34.377966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.934 [2024-10-01 13:52:34.377983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.934 [2024-10-01 13:52:34.378002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.934 [2024-10-01 13:52:34.378020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.934 [2024-10-01 13:52:34.378034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.934 [2024-10-01 13:52:34.378048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.934 [2024-10-01 13:52:34.379282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.934 [2024-10-01 13:52:34.379322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.934 [2024-10-01 13:52:34.388765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.934 [2024-10-01 13:52:34.388825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.935 [2024-10-01 13:52:34.388950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.935 [2024-10-01 13:52:34.388983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.935 [2024-10-01 13:52:34.389002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.935 [2024-10-01 13:52:34.389061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.935 [2024-10-01 13:52:34.389085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.935 [2024-10-01 13:52:34.389101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.935 [2024-10-01 13:52:34.389137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.935 [2024-10-01 13:52:34.389161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.935 [2024-10-01 13:52:34.389187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.935 [2024-10-01 13:52:34.389205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.935 [2024-10-01 13:52:34.389220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.935 [2024-10-01 13:52:34.389237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.935 [2024-10-01 13:52:34.389252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.935 [2024-10-01 13:52:34.389298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.935 [2024-10-01 13:52:34.389334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.935 [2024-10-01 13:52:34.389353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.935 [2024-10-01 13:52:34.400902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.935 [2024-10-01 13:52:34.400990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.935 [2024-10-01 13:52:34.401147] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.935 [2024-10-01 13:52:34.401183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.935 [2024-10-01 13:52:34.401202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.935 [2024-10-01 13:52:34.401254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.935 [2024-10-01 13:52:34.401279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.935 [2024-10-01 13:52:34.401296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.935 [2024-10-01 13:52:34.401332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.935 [2024-10-01 13:52:34.401356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.935 [2024-10-01 13:52:34.401382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.935 [2024-10-01 13:52:34.401401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.935 [2024-10-01 13:52:34.401417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.935 [2024-10-01 13:52:34.401435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.935 [2024-10-01 13:52:34.401450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.935 [2024-10-01 13:52:34.401464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.935 [2024-10-01 13:52:34.401495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.935 [2024-10-01 13:52:34.401513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.935 [2024-10-01 13:52:34.411505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.935 [2024-10-01 13:52:34.411557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.935 [2024-10-01 13:52:34.411675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.935 [2024-10-01 13:52:34.411718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.935 [2024-10-01 13:52:34.411736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.935 [2024-10-01 13:52:34.411789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.935 [2024-10-01 13:52:34.411813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.935 [2024-10-01 13:52:34.411829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.935 [2024-10-01 13:52:34.411862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.935 [2024-10-01 13:52:34.411886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.935 [2024-10-01 13:52:34.411971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.935 [2024-10-01 13:52:34.411992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.935 [2024-10-01 13:52:34.412006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.935 [2024-10-01 13:52:34.412023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.935 [2024-10-01 13:52:34.412039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.935 [2024-10-01 13:52:34.412053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.935 [2024-10-01 13:52:34.413251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.935 [2024-10-01 13:52:34.413289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.935 [2024-10-01 13:52:34.422939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.935 [2024-10-01 13:52:34.422989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.935 [2024-10-01 13:52:34.423088] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.935 [2024-10-01 13:52:34.423126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.935 [2024-10-01 13:52:34.423143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.935 [2024-10-01 13:52:34.423193] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.935 [2024-10-01 13:52:34.423218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.935 [2024-10-01 13:52:34.423233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.935 [2024-10-01 13:52:34.423265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.935 [2024-10-01 13:52:34.423289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.935 [2024-10-01 13:52:34.423316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.935 [2024-10-01 13:52:34.423333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.935 [2024-10-01 13:52:34.423347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.935 [2024-10-01 13:52:34.423363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.935 [2024-10-01 13:52:34.423379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.935 [2024-10-01 13:52:34.423392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.935 [2024-10-01 13:52:34.423421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.935 [2024-10-01 13:52:34.423439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.935 [2024-10-01 13:52:34.433071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.935 [2024-10-01 13:52:34.433128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.935 [2024-10-01 13:52:34.434374] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.935 [2024-10-01 13:52:34.434427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.935 [2024-10-01 13:52:34.434448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.935 [2024-10-01 13:52:34.434525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.935 [2024-10-01 13:52:34.434566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.936 [2024-10-01 13:52:34.434584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.936 [2024-10-01 13:52:34.435444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.936 [2024-10-01 13:52:34.435490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.936 [2024-10-01 13:52:34.435636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.936 [2024-10-01 13:52:34.435663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.936 [2024-10-01 13:52:34.435678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.936 [2024-10-01 13:52:34.435697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.936 [2024-10-01 13:52:34.435712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.936 [2024-10-01 13:52:34.435725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.936 [2024-10-01 13:52:34.435757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.936 [2024-10-01 13:52:34.435775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.936 [2024-10-01 13:52:34.444442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.936 [2024-10-01 13:52:34.444491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.936 [2024-10-01 13:52:34.444598] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.936 [2024-10-01 13:52:34.444629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.936 [2024-10-01 13:52:34.444647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.936 [2024-10-01 13:52:34.444696] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.936 [2024-10-01 13:52:34.444720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.936 [2024-10-01 13:52:34.444736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.936 [2024-10-01 13:52:34.444769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.936 [2024-10-01 13:52:34.444792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.936 [2024-10-01 13:52:34.444818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.936 [2024-10-01 13:52:34.444836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.936 [2024-10-01 13:52:34.444850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.936 [2024-10-01 13:52:34.444867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.936 [2024-10-01 13:52:34.444881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.936 [2024-10-01 13:52:34.444894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.936 [2024-10-01 13:52:34.444947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.936 [2024-10-01 13:52:34.444983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.936 [2024-10-01 13:52:34.454592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.936 [2024-10-01 13:52:34.454641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.936 [2024-10-01 13:52:34.454737] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.936 [2024-10-01 13:52:34.454769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.936 [2024-10-01 13:52:34.454786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.936 [2024-10-01 13:52:34.454837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.936 [2024-10-01 13:52:34.454860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.936 [2024-10-01 13:52:34.454876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.936 [2024-10-01 13:52:34.454909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.936 [2024-10-01 13:52:34.454952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.936 [2024-10-01 13:52:34.455726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.936 [2024-10-01 13:52:34.455764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.936 [2024-10-01 13:52:34.455783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.936 [2024-10-01 13:52:34.455801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.936 [2024-10-01 13:52:34.455816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.936 [2024-10-01 13:52:34.455830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.936 [2024-10-01 13:52:34.456038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.936 [2024-10-01 13:52:34.456064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.936 [2024-10-01 13:52:34.465286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.936 [2024-10-01 13:52:34.465335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.936 [2024-10-01 13:52:34.465434] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.936 [2024-10-01 13:52:34.465466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.936 [2024-10-01 13:52:34.465483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.936 [2024-10-01 13:52:34.465539] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.936 [2024-10-01 13:52:34.465563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.936 [2024-10-01 13:52:34.465579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.936 [2024-10-01 13:52:34.466335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.936 [2024-10-01 13:52:34.466378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.936 [2024-10-01 13:52:34.466613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.936 [2024-10-01 13:52:34.466660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.936 [2024-10-01 13:52:34.466677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.936 [2024-10-01 13:52:34.466695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.936 [2024-10-01 13:52:34.466709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.936 [2024-10-01 13:52:34.466724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.936 [2024-10-01 13:52:34.466764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.936 [2024-10-01 13:52:34.466785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.936 [2024-10-01 13:52:34.475417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.936 [2024-10-01 13:52:34.475465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.936 [2024-10-01 13:52:34.475561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.936 [2024-10-01 13:52:34.475592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.936 [2024-10-01 13:52:34.475609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.936 [2024-10-01 13:52:34.475658] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.936 [2024-10-01 13:52:34.475682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.936 [2024-10-01 13:52:34.475698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.936 [2024-10-01 13:52:34.475973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.936 [2024-10-01 13:52:34.476007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.936 [2024-10-01 13:52:34.476154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.936 [2024-10-01 13:52:34.476186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.936 [2024-10-01 13:52:34.476201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.936 [2024-10-01 13:52:34.476219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.937 [2024-10-01 13:52:34.476234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.937 [2024-10-01 13:52:34.476247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.937 [2024-10-01 13:52:34.476356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.937 [2024-10-01 13:52:34.476376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.937 [2024-10-01 13:52:34.487033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.937 [2024-10-01 13:52:34.487145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.937 [2024-10-01 13:52:34.487339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.937 [2024-10-01 13:52:34.487376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.937 [2024-10-01 13:52:34.487396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.937 [2024-10-01 13:52:34.487449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.937 [2024-10-01 13:52:34.487508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.937 [2024-10-01 13:52:34.487527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.937 [2024-10-01 13:52:34.487566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.937 [2024-10-01 13:52:34.487591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.937 [2024-10-01 13:52:34.487637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.937 [2024-10-01 13:52:34.487659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.937 [2024-10-01 13:52:34.487677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.937 [2024-10-01 13:52:34.487696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.937 [2024-10-01 13:52:34.487711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.937 [2024-10-01 13:52:34.487725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.937 [2024-10-01 13:52:34.487756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.937 [2024-10-01 13:52:34.487778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.937 [2024-10-01 13:52:34.497797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.937 [2024-10-01 13:52:34.497892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.937 [2024-10-01 13:52:34.498043] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.937 [2024-10-01 13:52:34.498078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.937 [2024-10-01 13:52:34.498098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.937 [2024-10-01 13:52:34.498152] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.937 [2024-10-01 13:52:34.498176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.937 [2024-10-01 13:52:34.498193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.937 [2024-10-01 13:52:34.498229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.937 [2024-10-01 13:52:34.498254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.937 [2024-10-01 13:52:34.498282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.937 [2024-10-01 13:52:34.498300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.937 [2024-10-01 13:52:34.498318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.937 [2024-10-01 13:52:34.498336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.937 [2024-10-01 13:52:34.498351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.937 [2024-10-01 13:52:34.498365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.937 [2024-10-01 13:52:34.499642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.937 [2024-10-01 13:52:34.499687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.937 [2024-10-01 13:52:34.509001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.937 [2024-10-01 13:52:34.509101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.937 [2024-10-01 13:52:34.509252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.937 [2024-10-01 13:52:34.509287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.937 [2024-10-01 13:52:34.509307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.937 [2024-10-01 13:52:34.509373] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.937 [2024-10-01 13:52:34.509397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.937 [2024-10-01 13:52:34.509414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.937 [2024-10-01 13:52:34.509450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.937 [2024-10-01 13:52:34.509475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.937 [2024-10-01 13:52:34.509502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.937 [2024-10-01 13:52:34.509520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.937 [2024-10-01 13:52:34.509537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.937 [2024-10-01 13:52:34.509555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.937 [2024-10-01 13:52:34.509571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.937 [2024-10-01 13:52:34.509585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.937 [2024-10-01 13:52:34.509616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.937 [2024-10-01 13:52:34.509636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.937 [2024-10-01 13:52:34.521055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.937 [2024-10-01 13:52:34.521170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.937 [2024-10-01 13:52:34.521333] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.937 [2024-10-01 13:52:34.521370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.937 [2024-10-01 13:52:34.521390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.937 [2024-10-01 13:52:34.521443] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.937 [2024-10-01 13:52:34.521467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.937 [2024-10-01 13:52:34.521483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.937 [2024-10-01 13:52:34.521521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.937 [2024-10-01 13:52:34.521546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.937 [2024-10-01 13:52:34.521595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.937 [2024-10-01 13:52:34.521617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.937 [2024-10-01 13:52:34.521677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.937 [2024-10-01 13:52:34.521698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.937 [2024-10-01 13:52:34.521713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.937 [2024-10-01 13:52:34.521727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.937 [2024-10-01 13:52:34.521759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.938 [2024-10-01 13:52:34.521778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.938 [2024-10-01 13:52:34.531541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.938 [2024-10-01 13:52:34.531644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.938 [2024-10-01 13:52:34.531797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.938 [2024-10-01 13:52:34.531832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.938 [2024-10-01 13:52:34.531853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.938 [2024-10-01 13:52:34.531905] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.938 [2024-10-01 13:52:34.531946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.938 [2024-10-01 13:52:34.531964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.938 [2024-10-01 13:52:34.532001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.938 [2024-10-01 13:52:34.532026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.938 [2024-10-01 13:52:34.533290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.938 [2024-10-01 13:52:34.533330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.938 [2024-10-01 13:52:34.533352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.938 [2024-10-01 13:52:34.533373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.938 [2024-10-01 13:52:34.533389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.938 [2024-10-01 13:52:34.533403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.938 [2024-10-01 13:52:34.533601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.938 [2024-10-01 13:52:34.533628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.938 [2024-10-01 13:52:34.542931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.938 [2024-10-01 13:52:34.543030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.938 [2024-10-01 13:52:34.543184] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.938 [2024-10-01 13:52:34.543221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.938 [2024-10-01 13:52:34.543241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.938 [2024-10-01 13:52:34.543304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.938 [2024-10-01 13:52:34.543328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.938 [2024-10-01 13:52:34.543381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.938 [2024-10-01 13:52:34.543420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.938 [2024-10-01 13:52:34.543446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.938 [2024-10-01 13:52:34.543472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.938 [2024-10-01 13:52:34.543490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.938 [2024-10-01 13:52:34.543507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.938 [2024-10-01 13:52:34.543525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.938 [2024-10-01 13:52:34.543541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.938 [2024-10-01 13:52:34.543554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.938 [2024-10-01 13:52:34.543585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.938 [2024-10-01 13:52:34.543604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.938 [2024-10-01 13:52:34.555079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.938 [2024-10-01 13:52:34.555154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.938 [2024-10-01 13:52:34.555319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.938 [2024-10-01 13:52:34.555355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.938 [2024-10-01 13:52:34.555375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.938 [2024-10-01 13:52:34.555428] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.938 [2024-10-01 13:52:34.555453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.938 [2024-10-01 13:52:34.555469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.938 [2024-10-01 13:52:34.555506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.938 [2024-10-01 13:52:34.555532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.938 [2024-10-01 13:52:34.555578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.938 [2024-10-01 13:52:34.555600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.938 [2024-10-01 13:52:34.555617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.938 [2024-10-01 13:52:34.555635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.938 [2024-10-01 13:52:34.555651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.938 [2024-10-01 13:52:34.555664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.938 [2024-10-01 13:52:34.555696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.938 [2024-10-01 13:52:34.555714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.938 [2024-10-01 13:52:34.565682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.938 [2024-10-01 13:52:34.565794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.938 [2024-10-01 13:52:34.565939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.938 [2024-10-01 13:52:34.565974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.938 [2024-10-01 13:52:34.565994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.938 [2024-10-01 13:52:34.566048] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.938 [2024-10-01 13:52:34.566072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.938 [2024-10-01 13:52:34.566088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.938 [2024-10-01 13:52:34.566125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.938 [2024-10-01 13:52:34.566150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.938 [2024-10-01 13:52:34.566183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.938 [2024-10-01 13:52:34.566201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.938 [2024-10-01 13:52:34.566217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.938 [2024-10-01 13:52:34.566235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.938 [2024-10-01 13:52:34.566250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.938 [2024-10-01 13:52:34.566263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.938 [2024-10-01 13:52:34.567541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.938 [2024-10-01 13:52:34.567580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.938 [2024-10-01 13:52:34.577021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.938 [2024-10-01 13:52:34.577118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.938 [2024-10-01 13:52:34.577266] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.938 [2024-10-01 13:52:34.577312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.938 [2024-10-01 13:52:34.577332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.938 [2024-10-01 13:52:34.577387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.938 [2024-10-01 13:52:34.577412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.938 [2024-10-01 13:52:34.577436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.938 [2024-10-01 13:52:34.577473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.938 [2024-10-01 13:52:34.577499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.938 [2024-10-01 13:52:34.577526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.938 [2024-10-01 13:52:34.577544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.938 [2024-10-01 13:52:34.577561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.938 [2024-10-01 13:52:34.577613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.938 [2024-10-01 13:52:34.577631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.938 [2024-10-01 13:52:34.577645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.938 [2024-10-01 13:52:34.577678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.938 [2024-10-01 13:52:34.577698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.939 [2024-10-01 13:52:34.589256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.939 [2024-10-01 13:52:34.589367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.939 [2024-10-01 13:52:34.589575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.939 [2024-10-01 13:52:34.589613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.939 [2024-10-01 13:52:34.589634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.939 [2024-10-01 13:52:34.589688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.939 [2024-10-01 13:52:34.589713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.939 [2024-10-01 13:52:34.589729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.939 [2024-10-01 13:52:34.589768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.939 [2024-10-01 13:52:34.589794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.939 [2024-10-01 13:52:34.589840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.939 [2024-10-01 13:52:34.589862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.939 [2024-10-01 13:52:34.589881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.939 [2024-10-01 13:52:34.589905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.939 [2024-10-01 13:52:34.589939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.939 [2024-10-01 13:52:34.589955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.939 [2024-10-01 13:52:34.589990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.939 [2024-10-01 13:52:34.590009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.939 [2024-10-01 13:52:34.599733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.939 [2024-10-01 13:52:34.599831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.939 [2024-10-01 13:52:34.600004] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.939 [2024-10-01 13:52:34.600041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.939 [2024-10-01 13:52:34.600061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.939 [2024-10-01 13:52:34.600114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.939 [2024-10-01 13:52:34.600138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.939 [2024-10-01 13:52:34.600155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.939 [2024-10-01 13:52:34.601463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.939 [2024-10-01 13:52:34.601511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.939 [2024-10-01 13:52:34.601711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.939 [2024-10-01 13:52:34.601737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.939 [2024-10-01 13:52:34.601755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.939 [2024-10-01 13:52:34.601774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.939 [2024-10-01 13:52:34.601790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.939 [2024-10-01 13:52:34.601804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.939 [2024-10-01 13:52:34.602588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.939 [2024-10-01 13:52:34.602626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.939 [2024-10-01 13:52:34.611014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.939 [2024-10-01 13:52:34.611111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.939 [2024-10-01 13:52:34.611273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.939 [2024-10-01 13:52:34.611309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.939 [2024-10-01 13:52:34.611329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.939 [2024-10-01 13:52:34.611382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.939 [2024-10-01 13:52:34.611407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.939 [2024-10-01 13:52:34.611424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.939 [2024-10-01 13:52:34.611470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.939 [2024-10-01 13:52:34.611495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.939 [2024-10-01 13:52:34.611523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.939 [2024-10-01 13:52:34.611541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.939 [2024-10-01 13:52:34.611559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.939 [2024-10-01 13:52:34.611577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.939 [2024-10-01 13:52:34.611592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.939 [2024-10-01 13:52:34.611607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.939 [2024-10-01 13:52:34.611638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.939 [2024-10-01 13:52:34.611657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.939 [2024-10-01 13:52:34.623081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.939 [2024-10-01 13:52:34.623172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.939 [2024-10-01 13:52:34.623404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.939 [2024-10-01 13:52:34.623440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.939 [2024-10-01 13:52:34.623460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.939 [2024-10-01 13:52:34.623513] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.939 [2024-10-01 13:52:34.623536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.939 [2024-10-01 13:52:34.623552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.939 [2024-10-01 13:52:34.623589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.939 [2024-10-01 13:52:34.623613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.939 [2024-10-01 13:52:34.623661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.939 [2024-10-01 13:52:34.623684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.939 [2024-10-01 13:52:34.623701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.939 [2024-10-01 13:52:34.623720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.939 [2024-10-01 13:52:34.623735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.939 [2024-10-01 13:52:34.623749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.939 [2024-10-01 13:52:34.623780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.939 [2024-10-01 13:52:34.623799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.939 [2024-10-01 13:52:34.633575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.939 [2024-10-01 13:52:34.633629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.939 [2024-10-01 13:52:34.633732] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.939 [2024-10-01 13:52:34.633773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.940 [2024-10-01 13:52:34.633791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.940 [2024-10-01 13:52:34.633841] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.940 [2024-10-01 13:52:34.633865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.940 [2024-10-01 13:52:34.633881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.940 [2024-10-01 13:52:34.633928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.940 [2024-10-01 13:52:34.633956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.940 [2024-10-01 13:52:34.633984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.940 [2024-10-01 13:52:34.634002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.940 [2024-10-01 13:52:34.634017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.940 [2024-10-01 13:52:34.634035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.940 [2024-10-01 13:52:34.634073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.940 [2024-10-01 13:52:34.634089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.940 [2024-10-01 13:52:34.634121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.940 [2024-10-01 13:52:34.634140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.940 [2024-10-01 13:52:34.644738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.940 [2024-10-01 13:52:34.644792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.940 [2024-10-01 13:52:34.644898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.940 [2024-10-01 13:52:34.644944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.940 [2024-10-01 13:52:34.644963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.940 [2024-10-01 13:52:34.645016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.940 [2024-10-01 13:52:34.645041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.940 [2024-10-01 13:52:34.645057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.940 [2024-10-01 13:52:34.645091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.940 [2024-10-01 13:52:34.645114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.940 [2024-10-01 13:52:34.645140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.940 [2024-10-01 13:52:34.645157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.940 [2024-10-01 13:52:34.645172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.940 [2024-10-01 13:52:34.645189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.940 [2024-10-01 13:52:34.645204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.940 [2024-10-01 13:52:34.645217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.940 [2024-10-01 13:52:34.645247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.940 [2024-10-01 13:52:34.645265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.940 [2024-10-01 13:52:34.656752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.940 [2024-10-01 13:52:34.656863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.940 [2024-10-01 13:52:34.657076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.940 [2024-10-01 13:52:34.657113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.940 [2024-10-01 13:52:34.657135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.940 [2024-10-01 13:52:34.657188] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.940 [2024-10-01 13:52:34.657212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.940 [2024-10-01 13:52:34.657229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.940 [2024-10-01 13:52:34.657266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.940 [2024-10-01 13:52:34.657328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.940 [2024-10-01 13:52:34.657379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.940 [2024-10-01 13:52:34.657401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.940 [2024-10-01 13:52:34.657419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.940 [2024-10-01 13:52:34.657438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.940 [2024-10-01 13:52:34.657453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.940 [2024-10-01 13:52:34.657467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.940 [2024-10-01 13:52:34.657499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.940 [2024-10-01 13:52:34.657528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.940 [2024-10-01 13:52:34.667224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.940 [2024-10-01 13:52:34.667277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.940 [2024-10-01 13:52:34.667383] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.940 [2024-10-01 13:52:34.667416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.940 [2024-10-01 13:52:34.667434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.940 [2024-10-01 13:52:34.667485] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.940 [2024-10-01 13:52:34.667509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.940 [2024-10-01 13:52:34.667525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.940 [2024-10-01 13:52:34.667559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.940 [2024-10-01 13:52:34.667583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.940 [2024-10-01 13:52:34.667610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.940 [2024-10-01 13:52:34.667627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.940 [2024-10-01 13:52:34.667642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.940 [2024-10-01 13:52:34.667660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.940 [2024-10-01 13:52:34.667675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.940 [2024-10-01 13:52:34.667688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.940 [2024-10-01 13:52:34.667718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.940 [2024-10-01 13:52:34.667737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.940 [2024-10-01 13:52:34.678378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.940 [2024-10-01 13:52:34.678429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.940 [2024-10-01 13:52:34.678530] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.940 [2024-10-01 13:52:34.678575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.940 [2024-10-01 13:52:34.678624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.940 [2024-10-01 13:52:34.678680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.940 [2024-10-01 13:52:34.678705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.940 [2024-10-01 13:52:34.678721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.940 [2024-10-01 13:52:34.678756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.940 [2024-10-01 13:52:34.678780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.940 [2024-10-01 13:52:34.678806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.940 [2024-10-01 13:52:34.678823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.940 [2024-10-01 13:52:34.678837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.940 [2024-10-01 13:52:34.678855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.940 [2024-10-01 13:52:34.678869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.940 [2024-10-01 13:52:34.678882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.940 [2024-10-01 13:52:34.678930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.940 [2024-10-01 13:52:34.678951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.940 [2024-10-01 13:52:34.690398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.940 [2024-10-01 13:52:34.690455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.940 [2024-10-01 13:52:34.690609] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.941 [2024-10-01 13:52:34.690642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.941 [2024-10-01 13:52:34.690660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.941 [2024-10-01 13:52:34.690711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.941 [2024-10-01 13:52:34.690735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.941 [2024-10-01 13:52:34.690760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.941 [2024-10-01 13:52:34.690794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.941 [2024-10-01 13:52:34.690818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.941 [2024-10-01 13:52:34.690845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.941 [2024-10-01 13:52:34.690862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.941 [2024-10-01 13:52:34.690884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.941 [2024-10-01 13:52:34.690902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.941 [2024-10-01 13:52:34.690936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.941 [2024-10-01 13:52:34.690975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.941 [2024-10-01 13:52:34.691024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.941 [2024-10-01 13:52:34.691045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.941 [2024-10-01 13:52:34.701019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.941 [2024-10-01 13:52:34.701134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.941 [2024-10-01 13:52:34.701277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.941 [2024-10-01 13:52:34.701313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.941 [2024-10-01 13:52:34.701334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.941 [2024-10-01 13:52:34.701387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.941 [2024-10-01 13:52:34.701412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.941 [2024-10-01 13:52:34.701428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.941 [2024-10-01 13:52:34.702722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.941 [2024-10-01 13:52:34.702770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.941 [2024-10-01 13:52:34.702993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.941 [2024-10-01 13:52:34.703022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.941 [2024-10-01 13:52:34.703040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.941 [2024-10-01 13:52:34.703060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.941 [2024-10-01 13:52:34.703076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.941 [2024-10-01 13:52:34.703090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.941 [2024-10-01 13:52:34.703886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.941 [2024-10-01 13:52:34.703934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.941 [2024-10-01 13:52:34.712548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.941 [2024-10-01 13:52:34.712623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.941 [2024-10-01 13:52:34.712792] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.941 [2024-10-01 13:52:34.712837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.941 [2024-10-01 13:52:34.712856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.941 [2024-10-01 13:52:34.712908] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.941 [2024-10-01 13:52:34.712932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.941 [2024-10-01 13:52:34.712948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.941 [2024-10-01 13:52:34.712998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.941 [2024-10-01 13:52:34.713023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.941 [2024-10-01 13:52:34.713086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.941 [2024-10-01 13:52:34.713106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.941 [2024-10-01 13:52:34.713121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.941 [2024-10-01 13:52:34.713139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.941 [2024-10-01 13:52:34.713154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.941 [2024-10-01 13:52:34.713167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.941 [2024-10-01 13:52:34.713198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.941 [2024-10-01 13:52:34.713216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.941 [2024-10-01 13:52:34.722725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.941 [2024-10-01 13:52:34.722815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.941 [2024-10-01 13:52:34.722948] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.941 [2024-10-01 13:52:34.722981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.941 [2024-10-01 13:52:34.722999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.941 [2024-10-01 13:52:34.724294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.941 [2024-10-01 13:52:34.724344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.941 [2024-10-01 13:52:34.724365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.941 [2024-10-01 13:52:34.724387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.941 [2024-10-01 13:52:34.725259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.941 [2024-10-01 13:52:34.725311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.941 [2024-10-01 13:52:34.725331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.941 [2024-10-01 13:52:34.725347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.941 [2024-10-01 13:52:34.725469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.941 [2024-10-01 13:52:34.725495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.941 [2024-10-01 13:52:34.725510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.941 [2024-10-01 13:52:34.725525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.941 [2024-10-01 13:52:34.725577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.941 [2024-10-01 13:52:34.733958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.941 [2024-10-01 13:52:34.734013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.941 [2024-10-01 13:52:34.734139] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.941 [2024-10-01 13:52:34.734172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.941 [2024-10-01 13:52:34.734229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.941 [2024-10-01 13:52:34.734286] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.941 [2024-10-01 13:52:34.734311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.941 [2024-10-01 13:52:34.734327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.941 [2024-10-01 13:52:34.734362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.942 [2024-10-01 13:52:34.734395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.942 [2024-10-01 13:52:34.734422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.942 [2024-10-01 13:52:34.734440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.942 [2024-10-01 13:52:34.734455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.942 [2024-10-01 13:52:34.734473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.942 [2024-10-01 13:52:34.734488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.942 [2024-10-01 13:52:34.734503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.942 [2024-10-01 13:52:34.735490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.942 [2024-10-01 13:52:34.735534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.942 [2024-10-01 13:52:34.744093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.942 [2024-10-01 13:52:34.744170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.942 [2024-10-01 13:52:34.744262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.942 [2024-10-01 13:52:34.744294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.942 [2024-10-01 13:52:34.744312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.942 [2024-10-01 13:52:34.744381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.942 [2024-10-01 13:52:34.744408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.942 [2024-10-01 13:52:34.744425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.942 [2024-10-01 13:52:34.744444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.942 [2024-10-01 13:52:34.745254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.942 [2024-10-01 13:52:34.745296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.942 [2024-10-01 13:52:34.745315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.942 [2024-10-01 13:52:34.745331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.942 [2024-10-01 13:52:34.745529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.942 [2024-10-01 13:52:34.745555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.942 [2024-10-01 13:52:34.745570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.942 [2024-10-01 13:52:34.745609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.942 [2024-10-01 13:52:34.746636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.942 [2024-10-01 13:52:34.755061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.942 [2024-10-01 13:52:34.755110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.942 [2024-10-01 13:52:34.755208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.942 [2024-10-01 13:52:34.755247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.942 [2024-10-01 13:52:34.755273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.942 [2024-10-01 13:52:34.755324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.942 [2024-10-01 13:52:34.755348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.942 [2024-10-01 13:52:34.755364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.942 [2024-10-01 13:52:34.756139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.942 [2024-10-01 13:52:34.756183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.942 [2024-10-01 13:52:34.756381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.942 [2024-10-01 13:52:34.756407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.942 [2024-10-01 13:52:34.756422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.942 [2024-10-01 13:52:34.756441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.942 [2024-10-01 13:52:34.756457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.942 [2024-10-01 13:52:34.756470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.942 [2024-10-01 13:52:34.756509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.942 [2024-10-01 13:52:34.756529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.942 [2024-10-01 13:52:34.765184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.942 [2024-10-01 13:52:34.765257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.942 [2024-10-01 13:52:34.765339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.942 [2024-10-01 13:52:34.765369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.942 [2024-10-01 13:52:34.765386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.942 [2024-10-01 13:52:34.765690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.942 [2024-10-01 13:52:34.765733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.942 [2024-10-01 13:52:34.765753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.942 [2024-10-01 13:52:34.765773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.942 [2024-10-01 13:52:34.765934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.942 [2024-10-01 13:52:34.765964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.942 [2024-10-01 13:52:34.766003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.942 [2024-10-01 13:52:34.766018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.942 [2024-10-01 13:52:34.766131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.942 [2024-10-01 13:52:34.766152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.942 [2024-10-01 13:52:34.766166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.942 [2024-10-01 13:52:34.766180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.942 [2024-10-01 13:52:34.766215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.942 [2024-10-01 13:52:34.776248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.942 [2024-10-01 13:52:34.776365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.942 [2024-10-01 13:52:34.776451] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.942 [2024-10-01 13:52:34.776481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.942 [2024-10-01 13:52:34.776498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.942 [2024-10-01 13:52:34.776566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.942 [2024-10-01 13:52:34.776594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.942 [2024-10-01 13:52:34.776611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.942 [2024-10-01 13:52:34.776630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.942 [2024-10-01 13:52:34.776663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.943 [2024-10-01 13:52:34.776683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.943 [2024-10-01 13:52:34.776697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.943 [2024-10-01 13:52:34.776711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.943 [2024-10-01 13:52:34.776742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.943 [2024-10-01 13:52:34.776761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.943 [2024-10-01 13:52:34.776775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.943 [2024-10-01 13:52:34.776788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.943 [2024-10-01 13:52:34.776833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.943 [2024-10-01 13:52:34.786472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.943 [2024-10-01 13:52:34.786570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.943 [2024-10-01 13:52:34.786658] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.943 [2024-10-01 13:52:34.786688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.943 [2024-10-01 13:52:34.786706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.943 [2024-10-01 13:52:34.786795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.943 [2024-10-01 13:52:34.786824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.943 [2024-10-01 13:52:34.786841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.943 [2024-10-01 13:52:34.786860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.943 [2024-10-01 13:52:34.786893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.943 [2024-10-01 13:52:34.786928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.943 [2024-10-01 13:52:34.786947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.943 [2024-10-01 13:52:34.786961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.943 [2024-10-01 13:52:34.788160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.943 [2024-10-01 13:52:34.788199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.943 [2024-10-01 13:52:34.788218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.943 [2024-10-01 13:52:34.788233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.943 [2024-10-01 13:52:34.789174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.943 [2024-10-01 13:52:34.797199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.943 [2024-10-01 13:52:34.797249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.943 [2024-10-01 13:52:34.797363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.943 [2024-10-01 13:52:34.797395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.943 [2024-10-01 13:52:34.797413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.943 [2024-10-01 13:52:34.797463] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.943 [2024-10-01 13:52:34.797486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.943 [2024-10-01 13:52:34.797502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.943 [2024-10-01 13:52:34.797535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.943 [2024-10-01 13:52:34.797558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.943 [2024-10-01 13:52:34.797584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.943 [2024-10-01 13:52:34.797601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.943 [2024-10-01 13:52:34.797615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.943 [2024-10-01 13:52:34.797632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.943 [2024-10-01 13:52:34.797646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.943 [2024-10-01 13:52:34.797660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.943 [2024-10-01 13:52:34.797689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.943 [2024-10-01 13:52:34.797707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.943 [2024-10-01 13:52:34.808725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.943 [2024-10-01 13:52:34.808823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.943 [2024-10-01 13:52:34.808971] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.943 [2024-10-01 13:52:34.809008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.943 [2024-10-01 13:52:34.809027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.943 [2024-10-01 13:52:34.809081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.943 [2024-10-01 13:52:34.809114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.943 [2024-10-01 13:52:34.809131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.943 [2024-10-01 13:52:34.809168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.943 [2024-10-01 13:52:34.809193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.943 [2024-10-01 13:52:34.809220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.943 [2024-10-01 13:52:34.809238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.943 [2024-10-01 13:52:34.809255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.943 [2024-10-01 13:52:34.809274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.943 [2024-10-01 13:52:34.809289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.943 [2024-10-01 13:52:34.809303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.943 [2024-10-01 13:52:34.809333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.943 [2024-10-01 13:52:34.809352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.943 [2024-10-01 13:52:34.818947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.943 [2024-10-01 13:52:34.819082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.943 [2024-10-01 13:52:34.819211] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.943 [2024-10-01 13:52:34.819247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.943 [2024-10-01 13:52:34.819266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.943 [2024-10-01 13:52:34.820578] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.943 [2024-10-01 13:52:34.820623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.943 [2024-10-01 13:52:34.820646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.943 [2024-10-01 13:52:34.820669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.943 [2024-10-01 13:52:34.821577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.943 [2024-10-01 13:52:34.821621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.943 [2024-10-01 13:52:34.821642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.943 [2024-10-01 13:52:34.821703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.943 [2024-10-01 13:52:34.821985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.943 [2024-10-01 13:52:34.822021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.943 [2024-10-01 13:52:34.822037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.943 [2024-10-01 13:52:34.822052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.943 [2024-10-01 13:52:34.822085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.943 [2024-10-01 13:52:34.829697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.943 [2024-10-01 13:52:34.829793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.943 [2024-10-01 13:52:34.829973] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.943 [2024-10-01 13:52:34.830011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.943 [2024-10-01 13:52:34.830034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.943 [2024-10-01 13:52:34.830087] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.943 [2024-10-01 13:52:34.830111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.943 [2024-10-01 13:52:34.830127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.943 [2024-10-01 13:52:34.830164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.943 [2024-10-01 13:52:34.830190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.943 [2024-10-01 13:52:34.830217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.943 [2024-10-01 13:52:34.830236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.943 [2024-10-01 13:52:34.830253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.944 [2024-10-01 13:52:34.830271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.944 [2024-10-01 13:52:34.830287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.944 [2024-10-01 13:52:34.830301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.944 [2024-10-01 13:52:34.830332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.944 [2024-10-01 13:52:34.830352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.944 [2024-10-01 13:52:34.841572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.944 [2024-10-01 13:52:34.841681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.944 [2024-10-01 13:52:34.841845] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.944 [2024-10-01 13:52:34.841882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.944 [2024-10-01 13:52:34.841903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.944 [2024-10-01 13:52:34.841977] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.944 [2024-10-01 13:52:34.842003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.944 [2024-10-01 13:52:34.842060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.944 [2024-10-01 13:52:34.842101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.944 [2024-10-01 13:52:34.842127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.944 [2024-10-01 13:52:34.842177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.944 [2024-10-01 13:52:34.842200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.944 [2024-10-01 13:52:34.842217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.944 [2024-10-01 13:52:34.842237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.944 [2024-10-01 13:52:34.842252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.944 [2024-10-01 13:52:34.842266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.944 [2024-10-01 13:52:34.842297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.944 [2024-10-01 13:52:34.842316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.944 [2024-10-01 13:52:34.851815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.944 [2024-10-01 13:52:34.851897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.944 [2024-10-01 13:52:34.852038] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.944 [2024-10-01 13:52:34.852072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.944 [2024-10-01 13:52:34.852093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.944 [2024-10-01 13:52:34.852144] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.944 [2024-10-01 13:52:34.852169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.944 [2024-10-01 13:52:34.852186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.944 [2024-10-01 13:52:34.852226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.944 [2024-10-01 13:52:34.852252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.944 [2024-10-01 13:52:34.852279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.944 [2024-10-01 13:52:34.852296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.944 [2024-10-01 13:52:34.852313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.944 [2024-10-01 13:52:34.852333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.944 [2024-10-01 13:52:34.852349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.944 [2024-10-01 13:52:34.852363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.944 [2024-10-01 13:52:34.853593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.944 [2024-10-01 13:52:34.853631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.944 [2024-10-01 13:52:34.862970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.944 [2024-10-01 13:52:34.863050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.944 [2024-10-01 13:52:34.863156] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.944 [2024-10-01 13:52:34.863188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.944 [2024-10-01 13:52:34.863206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.944 [2024-10-01 13:52:34.863256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.944 [2024-10-01 13:52:34.863280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.944 [2024-10-01 13:52:34.863296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.944 [2024-10-01 13:52:34.863330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.944 [2024-10-01 13:52:34.863353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.944 [2024-10-01 13:52:34.863380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.944 [2024-10-01 13:52:34.863397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.944 [2024-10-01 13:52:34.863411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.944 [2024-10-01 13:52:34.863429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.944 [2024-10-01 13:52:34.863444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.944 [2024-10-01 13:52:34.863457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.944 [2024-10-01 13:52:34.863487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.944 [2024-10-01 13:52:34.863505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.944 8018.00 IOPS, 31.32 MiB/s [2024-10-01 13:52:34.875222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.944 [2024-10-01 13:52:34.875324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.944 [2024-10-01 13:52:34.875535] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.944 [2024-10-01 13:52:34.875573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.944 [2024-10-01 13:52:34.875593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.944 [2024-10-01 13:52:34.875646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.944 [2024-10-01 13:52:34.875671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.944 [2024-10-01 13:52:34.875688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.944 [2024-10-01 13:52:34.875725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.944 [2024-10-01 13:52:34.875752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.944 [2024-10-01 13:52:34.875779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.944 [2024-10-01 13:52:34.875797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.944 [2024-10-01 13:52:34.875815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.944 [2024-10-01 13:52:34.875870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.944 [2024-10-01 13:52:34.875888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.944 [2024-10-01 13:52:34.875902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.944 [2024-10-01 13:52:34.875956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.944 [2024-10-01 13:52:34.875978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
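Note on the interleaved performance sample above: 8018.00 IOPS at 31.32 MiB/s is internally consistent with a 4 KiB I/O size, since 8018 x 4096 B = 32,841,728 B/s ≈ 31.32 MiB/s. This is an editorial cross-check of the reported numbers, not output from the test itself.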
00:18:34.944 [2024-10-01 13:52:34.885785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.944 [2024-10-01 13:52:34.885905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.944 [2024-10-01 13:52:34.886072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.944 [2024-10-01 13:52:34.886108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.944 [2024-10-01 13:52:34.886128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.944 [2024-10-01 13:52:34.886182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.944 [2024-10-01 13:52:34.886206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.944 [2024-10-01 13:52:34.886223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.944 [2024-10-01 13:52:34.886260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.944 [2024-10-01 13:52:34.886285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.944 [2024-10-01 13:52:34.887570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.944 [2024-10-01 13:52:34.887612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.944 [2024-10-01 13:52:34.887634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.944 [2024-10-01 13:52:34.887654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.945 [2024-10-01 13:52:34.887670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.945 [2024-10-01 13:52:34.887684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.945 [2024-10-01 13:52:34.887890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.945 [2024-10-01 13:52:34.887934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.945 [2024-10-01 13:52:34.897089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.945 [2024-10-01 13:52:34.897187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.945 [2024-10-01 13:52:34.897339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.945 [2024-10-01 13:52:34.897374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.945 [2024-10-01 13:52:34.897395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.945 [2024-10-01 13:52:34.897448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.945 [2024-10-01 13:52:34.897473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.945 [2024-10-01 13:52:34.897520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.945 [2024-10-01 13:52:34.897561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.945 [2024-10-01 13:52:34.897587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.945 [2024-10-01 13:52:34.897614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.945 [2024-10-01 13:52:34.897632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.945 [2024-10-01 13:52:34.897649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.945 [2024-10-01 13:52:34.897667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.945 [2024-10-01 13:52:34.897683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.945 [2024-10-01 13:52:34.897696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.945 [2024-10-01 13:52:34.897733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.945 [2024-10-01 13:52:34.897752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.945 [2024-10-01 13:52:34.908995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.945 [2024-10-01 13:52:34.909109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.945 [2024-10-01 13:52:34.909345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.945 [2024-10-01 13:52:34.909385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.945 [2024-10-01 13:52:34.909406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.945 [2024-10-01 13:52:34.909460] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.945 [2024-10-01 13:52:34.909484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.945 [2024-10-01 13:52:34.909501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.945 [2024-10-01 13:52:34.909540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.945 [2024-10-01 13:52:34.909566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.945 [2024-10-01 13:52:34.909616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.945 [2024-10-01 13:52:34.909639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.945 [2024-10-01 13:52:34.909657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.945 [2024-10-01 13:52:34.909677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.945 [2024-10-01 13:52:34.909693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.945 [2024-10-01 13:52:34.909707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.945 [2024-10-01 13:52:34.909739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.945 [2024-10-01 13:52:34.909758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.945 [2024-10-01 13:52:34.919650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.945 [2024-10-01 13:52:34.919705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.945 [2024-10-01 13:52:34.919849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.945 [2024-10-01 13:52:34.919883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.945 [2024-10-01 13:52:34.919901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.945 [2024-10-01 13:52:34.919971] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.945 [2024-10-01 13:52:34.919997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.945 [2024-10-01 13:52:34.920014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.945 [2024-10-01 13:52:34.920050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.945 [2024-10-01 13:52:34.920074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.945 [2024-10-01 13:52:34.920101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.945 [2024-10-01 13:52:34.920119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.945 [2024-10-01 13:52:34.920134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.945 [2024-10-01 13:52:34.920152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.945 [2024-10-01 13:52:34.920166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.945 [2024-10-01 13:52:34.920180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.945 [2024-10-01 13:52:34.920211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.945 [2024-10-01 13:52:34.920230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.945 [2024-10-01 13:52:34.931241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.945 [2024-10-01 13:52:34.931320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.945 [2024-10-01 13:52:34.931446] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.945 [2024-10-01 13:52:34.931481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.945 [2024-10-01 13:52:34.931501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.945 [2024-10-01 13:52:34.931554] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.945 [2024-10-01 13:52:34.931579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.945 [2024-10-01 13:52:34.931595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.945 [2024-10-01 13:52:34.931630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.945 [2024-10-01 13:52:34.931654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.945 [2024-10-01 13:52:34.931681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.945 [2024-10-01 13:52:34.931698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.945 [2024-10-01 13:52:34.931715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.945 [2024-10-01 13:52:34.931733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.945 [2024-10-01 13:52:34.931776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.945 [2024-10-01 13:52:34.931791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.945 [2024-10-01 13:52:34.931834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.945 [2024-10-01 13:52:34.931852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.945 [2024-10-01 13:52:34.943449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.945 [2024-10-01 13:52:34.943566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.945 [2024-10-01 13:52:34.943775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.945 [2024-10-01 13:52:34.943813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.945 [2024-10-01 13:52:34.943833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.945 [2024-10-01 13:52:34.943888] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.945 [2024-10-01 13:52:34.943930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.945 [2024-10-01 13:52:34.943951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.946 [2024-10-01 13:52:34.943991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.946 [2024-10-01 13:52:34.944016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.946 [2024-10-01 13:52:34.944044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.946 [2024-10-01 13:52:34.944068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.946 [2024-10-01 13:52:34.944086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.946 [2024-10-01 13:52:34.944105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.946 [2024-10-01 13:52:34.944120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.946 [2024-10-01 13:52:34.944135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.946 [2024-10-01 13:52:34.944167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.946 [2024-10-01 13:52:34.944185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.946 [2024-10-01 13:52:34.954033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.946 [2024-10-01 13:52:34.954142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.946 [2024-10-01 13:52:34.954290] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.946 [2024-10-01 13:52:34.954326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.946 [2024-10-01 13:52:34.954346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.946 [2024-10-01 13:52:34.954400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.946 [2024-10-01 13:52:34.954424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.946 [2024-10-01 13:52:34.954441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.946 [2024-10-01 13:52:34.955754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.946 [2024-10-01 13:52:34.955802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.946 [2024-10-01 13:52:34.956025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.946 [2024-10-01 13:52:34.956054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.946 [2024-10-01 13:52:34.956071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.946 [2024-10-01 13:52:34.956091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.946 [2024-10-01 13:52:34.956107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.946 [2024-10-01 13:52:34.956121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.946 [2024-10-01 13:52:34.956882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.946 [2024-10-01 13:52:34.956933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.946 [2024-10-01 13:52:34.965282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.946 [2024-10-01 13:52:34.965367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.946 [2024-10-01 13:52:34.965523] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.946 [2024-10-01 13:52:34.965559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.946 [2024-10-01 13:52:34.965580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.946 [2024-10-01 13:52:34.965631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.946 [2024-10-01 13:52:34.965656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.946 [2024-10-01 13:52:34.965672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.946 [2024-10-01 13:52:34.965709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.946 [2024-10-01 13:52:34.965733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.946 [2024-10-01 13:52:34.965760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.946 [2024-10-01 13:52:34.965778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.946 [2024-10-01 13:52:34.965795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.946 [2024-10-01 13:52:34.965814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.946 [2024-10-01 13:52:34.965829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.946 [2024-10-01 13:52:34.965842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.946 [2024-10-01 13:52:34.965873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.946 [2024-10-01 13:52:34.965892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.946 [2024-10-01 13:52:34.977099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.946 [2024-10-01 13:52:34.977188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.946 [2024-10-01 13:52:34.977390] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.946 [2024-10-01 13:52:34.977456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.946 [2024-10-01 13:52:34.977478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.946 [2024-10-01 13:52:34.977533] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.946 [2024-10-01 13:52:34.977558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.946 [2024-10-01 13:52:34.977574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.946 [2024-10-01 13:52:34.977613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.946 [2024-10-01 13:52:34.977638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.946 [2024-10-01 13:52:34.977670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.946 [2024-10-01 13:52:34.977687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.946 [2024-10-01 13:52:34.977704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.946 [2024-10-01 13:52:34.977723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.946 [2024-10-01 13:52:34.977739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.946 [2024-10-01 13:52:34.977752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.946 [2024-10-01 13:52:34.977804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.946 [2024-10-01 13:52:34.977830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.946 [2024-10-01 13:52:34.987839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.946 [2024-10-01 13:52:34.987894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.946 [2024-10-01 13:52:34.988016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.946 [2024-10-01 13:52:34.988059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.946 [2024-10-01 13:52:34.988078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.946 [2024-10-01 13:52:34.988128] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.946 [2024-10-01 13:52:34.988152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.946 [2024-10-01 13:52:34.988168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.946 [2024-10-01 13:52:34.988202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.946 [2024-10-01 13:52:34.988226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.946 [2024-10-01 13:52:34.988253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.946 [2024-10-01 13:52:34.988271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.946 [2024-10-01 13:52:34.988287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.946 [2024-10-01 13:52:34.988305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.946 [2024-10-01 13:52:34.988320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.946 [2024-10-01 13:52:34.988358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.946 [2024-10-01 13:52:34.989583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.946 [2024-10-01 13:52:34.989623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.946 [2024-10-01 13:52:34.999401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.946 [2024-10-01 13:52:34.999496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.946 [2024-10-01 13:52:34.999647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.946 [2024-10-01 13:52:34.999683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.946 [2024-10-01 13:52:34.999703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.946 [2024-10-01 13:52:34.999756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.946 [2024-10-01 13:52:34.999780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.946 [2024-10-01 13:52:34.999797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.946 [2024-10-01 13:52:34.999834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.946 [2024-10-01 13:52:34.999859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.946 [2024-10-01 13:52:34.999887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.947 [2024-10-01 13:52:34.999904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.947 [2024-10-01 13:52:34.999938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.947 [2024-10-01 13:52:34.999957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.947 [2024-10-01 13:52:34.999973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.947 [2024-10-01 13:52:34.999999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.947 [2024-10-01 13:52:35.000031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.947 [2024-10-01 13:52:35.000051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.947 [2024-10-01 13:52:35.009609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.947 [2024-10-01 13:52:35.010985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.947 [2024-10-01 13:52:35.011136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.947 [2024-10-01 13:52:35.011173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.947 [2024-10-01 13:52:35.011193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.947 [2024-10-01 13:52:35.012151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.947 [2024-10-01 13:52:35.012195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.947 [2024-10-01 13:52:35.012217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.947 [2024-10-01 13:52:35.012241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.947 [2024-10-01 13:52:35.012427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.947 [2024-10-01 13:52:35.012459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.947 [2024-10-01 13:52:35.012475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.947 [2024-10-01 13:52:35.012492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.947 [2024-10-01 13:52:35.012530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.947 [2024-10-01 13:52:35.012550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.947 [2024-10-01 13:52:35.012565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.947 [2024-10-01 13:52:35.012578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.947 [2024-10-01 13:52:35.012605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.947 [2024-10-01 13:52:35.021010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.947 [2024-10-01 13:52:35.021204] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.947 [2024-10-01 13:52:35.021243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.947 [2024-10-01 13:52:35.021263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.947 [2024-10-01 13:52:35.021316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.947 [2024-10-01 13:52:35.021359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.947 [2024-10-01 13:52:35.021392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.947 [2024-10-01 13:52:35.021411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.947 [2024-10-01 13:52:35.021427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.947 [2024-10-01 13:52:35.021459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.947 [2024-10-01 13:52:35.021520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.947 [2024-10-01 13:52:35.021547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.947 [2024-10-01 13:52:35.021565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.947 [2024-10-01 13:52:35.022515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.947 [2024-10-01 13:52:35.022784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.947 [2024-10-01 13:52:35.022813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.947 [2024-10-01 13:52:35.022829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.947 [2024-10-01 13:52:35.022923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.947 [2024-10-01 13:52:35.032033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.947 [2024-10-01 13:52:35.032122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.947 [2024-10-01 13:52:35.033205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.947 [2024-10-01 13:52:35.033256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.947 [2024-10-01 13:52:35.033311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.947 [2024-10-01 13:52:35.033370] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.947 [2024-10-01 13:52:35.033395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.947 [2024-10-01 13:52:35.033412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.947 [2024-10-01 13:52:35.034084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.947 [2024-10-01 13:52:35.034142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.947 [2024-10-01 13:52:35.034247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.947 [2024-10-01 13:52:35.034272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.947 [2024-10-01 13:52:35.034290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.947 [2024-10-01 13:52:35.034310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.947 [2024-10-01 13:52:35.034325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.947 [2024-10-01 13:52:35.034340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.947 [2024-10-01 13:52:35.034372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.947 [2024-10-01 13:52:35.034392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.947 [2024-10-01 13:52:35.042208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.947 [2024-10-01 13:52:35.042306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.947 [2024-10-01 13:52:35.042402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.947 [2024-10-01 13:52:35.042433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.947 [2024-10-01 13:52:35.042451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.947 [2024-10-01 13:52:35.042520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.947 [2024-10-01 13:52:35.042566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.947 [2024-10-01 13:52:35.042585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.947 [2024-10-01 13:52:35.042605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.947 [2024-10-01 13:52:35.042640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.947 [2024-10-01 13:52:35.042661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.947 [2024-10-01 13:52:35.042680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.947 [2024-10-01 13:52:35.042696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.947 [2024-10-01 13:52:35.042728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.947 [2024-10-01 13:52:35.042746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.947 [2024-10-01 13:52:35.042761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.947 [2024-10-01 13:52:35.042802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.947 [2024-10-01 13:52:35.042833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.947 [2024-10-01 13:52:35.053445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.947 [2024-10-01 13:52:35.053497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.947 [2024-10-01 13:52:35.053597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.947 [2024-10-01 13:52:35.053628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.947 [2024-10-01 13:52:35.053646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.947 [2024-10-01 13:52:35.053702] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.947 [2024-10-01 13:52:35.053735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.947 [2024-10-01 13:52:35.053751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.947 [2024-10-01 13:52:35.053784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.947 [2024-10-01 13:52:35.053807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.947 [2024-10-01 13:52:35.053852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.947 [2024-10-01 13:52:35.053874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.947 [2024-10-01 13:52:35.053889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.947 [2024-10-01 13:52:35.053906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.948 [2024-10-01 13:52:35.053940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.948 [2024-10-01 13:52:35.053955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.948 [2024-10-01 13:52:35.054860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.948 [2024-10-01 13:52:35.054922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.948 [2024-10-01 13:52:35.063589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.948 [2024-10-01 13:52:35.063717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.948 [2024-10-01 13:52:35.063853] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.948 [2024-10-01 13:52:35.063888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.948 [2024-10-01 13:52:35.063908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.948 [2024-10-01 13:52:35.064829] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.948 [2024-10-01 13:52:35.064873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.948 [2024-10-01 13:52:35.064894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.948 [2024-10-01 13:52:35.064930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.948 [2024-10-01 13:52:35.065145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.948 [2024-10-01 13:52:35.065204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.948 [2024-10-01 13:52:35.065223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.948 [2024-10-01 13:52:35.065239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.948 [2024-10-01 13:52:35.066271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.948 [2024-10-01 13:52:35.066320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.948 [2024-10-01 13:52:35.066339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.948 [2024-10-01 13:52:35.066354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.948 [2024-10-01 13:52:35.067042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.948 [2024-10-01 13:52:35.074773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.948 [2024-10-01 13:52:35.074872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.948 [2024-10-01 13:52:35.075803] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.948 [2024-10-01 13:52:35.075854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.948 [2024-10-01 13:52:35.075877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.948 [2024-10-01 13:52:35.075947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.948 [2024-10-01 13:52:35.075975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.948 [2024-10-01 13:52:35.075993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.948 [2024-10-01 13:52:35.076197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.948 [2024-10-01 13:52:35.076230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.948 [2024-10-01 13:52:35.076267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.948 [2024-10-01 13:52:35.076287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.948 [2024-10-01 13:52:35.076304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.948 [2024-10-01 13:52:35.076323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.948 [2024-10-01 13:52:35.076340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.948 [2024-10-01 13:52:35.076354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.948 [2024-10-01 13:52:35.076387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.948 [2024-10-01 13:52:35.076406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.948 [2024-10-01 13:52:35.084993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.948 [2024-10-01 13:52:35.085102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.948 [2024-10-01 13:52:35.085268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.948 [2024-10-01 13:52:35.085305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.948 [2024-10-01 13:52:35.085326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.948 [2024-10-01 13:52:35.085425] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.948 [2024-10-01 13:52:35.085451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.948 [2024-10-01 13:52:35.085468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.948 [2024-10-01 13:52:35.085757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.948 [2024-10-01 13:52:35.085805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.948 [2024-10-01 13:52:35.085977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.948 [2024-10-01 13:52:35.086005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.948 [2024-10-01 13:52:35.086023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.948 [2024-10-01 13:52:35.086042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.948 [2024-10-01 13:52:35.086057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.948 [2024-10-01 13:52:35.086071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.948 [2024-10-01 13:52:35.086193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.948 [2024-10-01 13:52:35.086224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.948 [2024-10-01 13:52:35.096334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.948 [2024-10-01 13:52:35.096439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.948 [2024-10-01 13:52:35.096596] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.948 [2024-10-01 13:52:35.096633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.948 [2024-10-01 13:52:35.096654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.948 [2024-10-01 13:52:35.096708] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.948 [2024-10-01 13:52:35.096733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.948 [2024-10-01 13:52:35.096750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.948 [2024-10-01 13:52:35.096796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.948 [2024-10-01 13:52:35.096821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.948 [2024-10-01 13:52:35.096848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.948 [2024-10-01 13:52:35.096867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.948 [2024-10-01 13:52:35.096884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.948 [2024-10-01 13:52:35.096902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.948 [2024-10-01 13:52:35.096937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.948 [2024-10-01 13:52:35.096953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.948 [2024-10-01 13:52:35.096988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.948 [2024-10-01 13:52:35.097036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.948 [2024-10-01 13:52:35.107864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.948 [2024-10-01 13:52:35.107966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.948 [2024-10-01 13:52:35.108832] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.948 [2024-10-01 13:52:35.108882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.948 [2024-10-01 13:52:35.108906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.948 [2024-10-01 13:52:35.108981] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.948 [2024-10-01 13:52:35.109007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.948 [2024-10-01 13:52:35.109024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.948 [2024-10-01 13:52:35.109216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.948 [2024-10-01 13:52:35.109249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.948 [2024-10-01 13:52:35.109285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.948 [2024-10-01 13:52:35.109306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.948 [2024-10-01 13:52:35.109322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.948 [2024-10-01 13:52:35.109341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.948 [2024-10-01 13:52:35.109356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.948 [2024-10-01 13:52:35.109371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.948 [2024-10-01 13:52:35.109403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.949 [2024-10-01 13:52:35.109422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.949 [2024-10-01 13:52:35.118046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.949 [2024-10-01 13:52:35.118122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.949 [2024-10-01 13:52:35.118207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.949 [2024-10-01 13:52:35.118238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.949 [2024-10-01 13:52:35.118256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.949 [2024-10-01 13:52:35.118324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.949 [2024-10-01 13:52:35.118351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.949 [2024-10-01 13:52:35.118368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.949 [2024-10-01 13:52:35.118388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.949 [2024-10-01 13:52:35.118420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.949 [2024-10-01 13:52:35.118441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.949 [2024-10-01 13:52:35.118456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.949 [2024-10-01 13:52:35.118504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.949 [2024-10-01 13:52:35.118551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.949 [2024-10-01 13:52:35.118581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.949 [2024-10-01 13:52:35.118596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.949 [2024-10-01 13:52:35.118611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.949 [2024-10-01 13:52:35.118639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.949 [2024-10-01 13:52:35.128159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.949 [2024-10-01 13:52:35.128353] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.949 [2024-10-01 13:52:35.128390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.949 [2024-10-01 13:52:35.128419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.949 [2024-10-01 13:52:35.128472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.949 [2024-10-01 13:52:35.128515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.949 [2024-10-01 13:52:35.128548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.949 [2024-10-01 13:52:35.128567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.949 [2024-10-01 13:52:35.128583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.949 [2024-10-01 13:52:35.128614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.949 [2024-10-01 13:52:35.128675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.949 [2024-10-01 13:52:35.128702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.949 [2024-10-01 13:52:35.128721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.949 [2024-10-01 13:52:35.128766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.949 [2024-10-01 13:52:35.128797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.949 [2024-10-01 13:52:35.128815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.949 [2024-10-01 13:52:35.128829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.949 [2024-10-01 13:52:35.128859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.949 [2024-10-01 13:52:35.139270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.949 [2024-10-01 13:52:35.139691] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.949 [2024-10-01 13:52:35.139743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.949 [2024-10-01 13:52:35.139766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.949 [2024-10-01 13:52:35.139816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.949 [2024-10-01 13:52:35.139857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.949 [2024-10-01 13:52:35.139988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.949 [2024-10-01 13:52:35.140018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.949 [2024-10-01 13:52:35.140036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.949 [2024-10-01 13:52:35.140053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.949 [2024-10-01 13:52:35.140067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.949 [2024-10-01 13:52:35.140083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.949 [2024-10-01 13:52:35.140120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.949 [2024-10-01 13:52:35.140143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.949 [2024-10-01 13:52:35.140173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.949 [2024-10-01 13:52:35.140190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.949 [2024-10-01 13:52:35.140204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.949 [2024-10-01 13:52:35.140231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.949 [2024-10-01 13:52:35.150279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.949 [2024-10-01 13:52:35.150388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.949 [2024-10-01 13:52:35.150535] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.949 [2024-10-01 13:52:35.150592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.949 [2024-10-01 13:52:35.150613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.949 [2024-10-01 13:52:35.150669] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.949 [2024-10-01 13:52:35.150694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.949 [2024-10-01 13:52:35.150711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.949 [2024-10-01 13:52:35.151703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.949 [2024-10-01 13:52:35.151751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.949 [2024-10-01 13:52:35.152041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.949 [2024-10-01 13:52:35.152073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.949 [2024-10-01 13:52:35.152092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.949 [2024-10-01 13:52:35.152112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.949 [2024-10-01 13:52:35.152128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.949 [2024-10-01 13:52:35.152142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.949 [2024-10-01 13:52:35.153468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.949 [2024-10-01 13:52:35.153507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.949 [2024-10-01 13:52:35.161674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.949 [2024-10-01 13:52:35.162406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.949 [2024-10-01 13:52:35.162568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.949 [2024-10-01 13:52:35.162606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.949 [2024-10-01 13:52:35.162626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.949 [2024-10-01 13:52:35.162758] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.949 [2024-10-01 13:52:35.162787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.949 [2024-10-01 13:52:35.162806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.949 [2024-10-01 13:52:35.162835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.949 [2024-10-01 13:52:35.162871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.949 [2024-10-01 13:52:35.162893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.949 [2024-10-01 13:52:35.162908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.949 [2024-10-01 13:52:35.162942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.949 [2024-10-01 13:52:35.162978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.949 [2024-10-01 13:52:35.162998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.949 [2024-10-01 13:52:35.163013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.949 [2024-10-01 13:52:35.163027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.949 [2024-10-01 13:52:35.163055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.949 [2024-10-01 13:52:35.173438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.949 [2024-10-01 13:52:35.173546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.950 [2024-10-01 13:52:35.173654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.950 [2024-10-01 13:52:35.173687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.950 [2024-10-01 13:52:35.173706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.950 [2024-10-01 13:52:35.173807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.950 [2024-10-01 13:52:35.173835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.950 [2024-10-01 13:52:35.173852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.950 [2024-10-01 13:52:35.173874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.950 [2024-10-01 13:52:35.173938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.950 [2024-10-01 13:52:35.173965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.950 [2024-10-01 13:52:35.173981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.950 [2024-10-01 13:52:35.174029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.950 [2024-10-01 13:52:35.174064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.950 [2024-10-01 13:52:35.174085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.950 [2024-10-01 13:52:35.174099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.950 [2024-10-01 13:52:35.174113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.950 [2024-10-01 13:52:35.174141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.950 [2024-10-01 13:52:35.184893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.950 [2024-10-01 13:52:35.184964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.950 [2024-10-01 13:52:35.185778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.950 [2024-10-01 13:52:35.185824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.950 [2024-10-01 13:52:35.185845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.950 [2024-10-01 13:52:35.185899] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.950 [2024-10-01 13:52:35.185939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.950 [2024-10-01 13:52:35.185958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.950 [2024-10-01 13:52:35.186143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.950 [2024-10-01 13:52:35.186174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.950 [2024-10-01 13:52:35.186230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.950 [2024-10-01 13:52:35.186254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.950 [2024-10-01 13:52:35.186270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.950 [2024-10-01 13:52:35.186288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.950 [2024-10-01 13:52:35.186304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.950 [2024-10-01 13:52:35.186318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.950 [2024-10-01 13:52:35.186349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.950 [2024-10-01 13:52:35.186368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.950 [2024-10-01 13:52:35.195073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.950 [2024-10-01 13:52:35.195215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.950 [2024-10-01 13:52:35.195347] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.950 [2024-10-01 13:52:35.195382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.950 [2024-10-01 13:52:35.195401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.950 [2024-10-01 13:52:35.195472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.950 [2024-10-01 13:52:35.195499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.950 [2024-10-01 13:52:35.195550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.950 [2024-10-01 13:52:35.195575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.950 [2024-10-01 13:52:35.195612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.950 [2024-10-01 13:52:35.195633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.950 [2024-10-01 13:52:35.195648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.950 [2024-10-01 13:52:35.195665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.950 [2024-10-01 13:52:35.195697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.950 [2024-10-01 13:52:35.195717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.950 [2024-10-01 13:52:35.195731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.950 [2024-10-01 13:52:35.195745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.950 [2024-10-01 13:52:35.195773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.950 [2024-10-01 13:52:35.205248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.950 [2024-10-01 13:52:35.205489] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.950 [2024-10-01 13:52:35.205527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.950 [2024-10-01 13:52:35.205548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.950 [2024-10-01 13:52:35.205601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.950 [2024-10-01 13:52:35.205644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.950 [2024-10-01 13:52:35.205677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.950 [2024-10-01 13:52:35.205695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.950 [2024-10-01 13:52:35.205713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.950 [2024-10-01 13:52:35.205745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.950 [2024-10-01 13:52:35.205806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.950 [2024-10-01 13:52:35.205833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.950 [2024-10-01 13:52:35.205851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.950 [2024-10-01 13:52:35.205883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.950 [2024-10-01 13:52:35.205930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.950 [2024-10-01 13:52:35.205952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.950 [2024-10-01 13:52:35.205975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.950 [2024-10-01 13:52:35.206005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.950 [2024-10-01 13:52:35.216379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.950 [2024-10-01 13:52:35.216489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.950 [2024-10-01 13:52:35.217462] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.950 [2024-10-01 13:52:35.217513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.950 [2024-10-01 13:52:35.217536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.950 [2024-10-01 13:52:35.217591] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.950 [2024-10-01 13:52:35.217616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.950 [2024-10-01 13:52:35.217632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.950 [2024-10-01 13:52:35.217839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.950 [2024-10-01 13:52:35.217872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.950 [2024-10-01 13:52:35.217924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.950 [2024-10-01 13:52:35.217948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.950 [2024-10-01 13:52:35.217965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.951 [2024-10-01 13:52:35.217984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.951 [2024-10-01 13:52:35.218000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.951 [2024-10-01 13:52:35.218014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.951 [2024-10-01 13:52:35.218047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.951 [2024-10-01 13:52:35.218067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.951 [2024-10-01 13:52:35.226628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.951 [2024-10-01 13:52:35.226760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.951 [2024-10-01 13:52:35.226880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.951 [2024-10-01 13:52:35.226927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.951 [2024-10-01 13:52:35.226949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.951 [2024-10-01 13:52:35.227022] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.951 [2024-10-01 13:52:35.227049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.951 [2024-10-01 13:52:35.227067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.951 [2024-10-01 13:52:35.227088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.951 [2024-10-01 13:52:35.227123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.951 [2024-10-01 13:52:35.227144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.951 [2024-10-01 13:52:35.227159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.951 [2024-10-01 13:52:35.227176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.951 [2024-10-01 13:52:35.227207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.951 [2024-10-01 13:52:35.227266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.951 [2024-10-01 13:52:35.227283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.951 [2024-10-01 13:52:35.227297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.951 [2024-10-01 13:52:35.228593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.951 [2024-10-01 13:52:35.236788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.951 [2024-10-01 13:52:35.236972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.951 [2024-10-01 13:52:35.237009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.951 [2024-10-01 13:52:35.237029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.951 [2024-10-01 13:52:35.237082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.951 [2024-10-01 13:52:35.237145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.951 [2024-10-01 13:52:35.237180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.951 [2024-10-01 13:52:35.237198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.951 [2024-10-01 13:52:35.237214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.951 [2024-10-01 13:52:35.237245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.951 [2024-10-01 13:52:35.237305] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.951 [2024-10-01 13:52:35.237332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.951 [2024-10-01 13:52:35.237349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.951 [2024-10-01 13:52:35.237381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.951 [2024-10-01 13:52:35.237412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.951 [2024-10-01 13:52:35.237430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.951 [2024-10-01 13:52:35.237444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.951 [2024-10-01 13:52:35.237473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.951 [2024-10-01 13:52:35.247799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.951 [2024-10-01 13:52:35.247862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.951 [2024-10-01 13:52:35.248674] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.951 [2024-10-01 13:52:35.248720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.951 [2024-10-01 13:52:35.248742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.951 [2024-10-01 13:52:35.248801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.951 [2024-10-01 13:52:35.248826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.951 [2024-10-01 13:52:35.248843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.951 [2024-10-01 13:52:35.249075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.951 [2024-10-01 13:52:35.249109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.951 [2024-10-01 13:52:35.249145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.951 [2024-10-01 13:52:35.249164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.951 [2024-10-01 13:52:35.249180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.951 [2024-10-01 13:52:35.249198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.951 [2024-10-01 13:52:35.249213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.951 [2024-10-01 13:52:35.249226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.951 [2024-10-01 13:52:35.249257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.951 [2024-10-01 13:52:35.249275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.951 [2024-10-01 13:52:35.257973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.951 [2024-10-01 13:52:35.258108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.951 [2024-10-01 13:52:35.258238] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.951 [2024-10-01 13:52:35.258273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.951 [2024-10-01 13:52:35.258293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.951 [2024-10-01 13:52:35.258363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.951 [2024-10-01 13:52:35.258391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.951 [2024-10-01 13:52:35.258408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.951 [2024-10-01 13:52:35.258430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.951 [2024-10-01 13:52:35.258465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.951 [2024-10-01 13:52:35.258486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.951 [2024-10-01 13:52:35.258502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.951 [2024-10-01 13:52:35.258519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.951 [2024-10-01 13:52:35.258567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.951 [2024-10-01 13:52:35.258590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.951 [2024-10-01 13:52:35.258605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.951 [2024-10-01 13:52:35.258620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.951 [2024-10-01 13:52:35.259953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.951 [2024-10-01 13:52:35.268141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.951 [2024-10-01 13:52:35.268359] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.951 [2024-10-01 13:52:35.268396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.951 [2024-10-01 13:52:35.268460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.951 [2024-10-01 13:52:35.268517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.951 [2024-10-01 13:52:35.268560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.951 [2024-10-01 13:52:35.268593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.951 [2024-10-01 13:52:35.268611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.951 [2024-10-01 13:52:35.268628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.951 [2024-10-01 13:52:35.268659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.951 [2024-10-01 13:52:35.268722] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.951 [2024-10-01 13:52:35.268750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.951 [2024-10-01 13:52:35.268767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.951 [2024-10-01 13:52:35.268799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.951 [2024-10-01 13:52:35.268831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.951 [2024-10-01 13:52:35.268848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.951 [2024-10-01 13:52:35.268863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.952 [2024-10-01 13:52:35.268892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.952 [2024-10-01 13:52:35.279688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.952 [2024-10-01 13:52:35.279800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.952 [2024-10-01 13:52:35.280133] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.952 [2024-10-01 13:52:35.280172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.952 [2024-10-01 13:52:35.280193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.952 [2024-10-01 13:52:35.280246] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.952 [2024-10-01 13:52:35.280271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.952 [2024-10-01 13:52:35.280287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.952 [2024-10-01 13:52:35.280370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.952 [2024-10-01 13:52:35.280400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.952 [2024-10-01 13:52:35.280430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.952 [2024-10-01 13:52:35.280449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.952 [2024-10-01 13:52:35.280466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.952 [2024-10-01 13:52:35.280486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.952 [2024-10-01 13:52:35.280501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.952 [2024-10-01 13:52:35.280546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.952 [2024-10-01 13:52:35.280580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.952 [2024-10-01 13:52:35.280599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.952 [2024-10-01 13:52:35.291049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.952 [2024-10-01 13:52:35.291157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.952 [2024-10-01 13:52:35.291308] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.952 [2024-10-01 13:52:35.291345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.952 [2024-10-01 13:52:35.291366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.952 [2024-10-01 13:52:35.291420] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.952 [2024-10-01 13:52:35.291444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.952 [2024-10-01 13:52:35.291461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.952 [2024-10-01 13:52:35.292415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.952 [2024-10-01 13:52:35.292463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.952 [2024-10-01 13:52:35.292715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.952 [2024-10-01 13:52:35.292753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.952 [2024-10-01 13:52:35.292775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.952 [2024-10-01 13:52:35.292803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.952 [2024-10-01 13:52:35.292819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.952 [2024-10-01 13:52:35.292832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.952 [2024-10-01 13:52:35.294127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.952 [2024-10-01 13:52:35.294164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.952 [2024-10-01 13:52:35.302781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.952 [2024-10-01 13:52:35.302838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.952 [2024-10-01 13:52:35.303040] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.952 [2024-10-01 13:52:35.303075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.952 [2024-10-01 13:52:35.303095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.952 [2024-10-01 13:52:35.303147] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.952 [2024-10-01 13:52:35.303171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.952 [2024-10-01 13:52:35.303187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.952 [2024-10-01 13:52:35.303222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.952 [2024-10-01 13:52:35.303288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.952 [2024-10-01 13:52:35.303318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.952 [2024-10-01 13:52:35.303336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.952 [2024-10-01 13:52:35.303352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.952 [2024-10-01 13:52:35.303370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.952 [2024-10-01 13:52:35.303385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.952 [2024-10-01 13:52:35.303408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.952 [2024-10-01 13:52:35.303439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.952 [2024-10-01 13:52:35.303457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.952 [2024-10-01 13:52:35.314379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.952 [2024-10-01 13:52:35.314438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.952 [2024-10-01 13:52:35.314558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.952 [2024-10-01 13:52:35.314591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.952 [2024-10-01 13:52:35.314609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.952 [2024-10-01 13:52:35.314661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.952 [2024-10-01 13:52:35.314686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.952 [2024-10-01 13:52:35.314703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.952 [2024-10-01 13:52:35.314738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.952 [2024-10-01 13:52:35.314773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.952 [2024-10-01 13:52:35.314801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.952 [2024-10-01 13:52:35.314818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.952 [2024-10-01 13:52:35.314833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.952 [2024-10-01 13:52:35.314850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.952 [2024-10-01 13:52:35.314866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.952 [2024-10-01 13:52:35.314880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.952 [2024-10-01 13:52:35.314910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.952 [2024-10-01 13:52:35.314950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.952 [2024-10-01 13:52:35.324527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.952 [2024-10-01 13:52:35.325810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.952 [2024-10-01 13:52:35.325947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.952 [2024-10-01 13:52:35.325981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.952 [2024-10-01 13:52:35.326035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.952 [2024-10-01 13:52:35.326990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.952 [2024-10-01 13:52:35.327048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.952 [2024-10-01 13:52:35.327069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.952 [2024-10-01 13:52:35.327092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.952 [2024-10-01 13:52:35.327342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.952 [2024-10-01 13:52:35.327382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.952 [2024-10-01 13:52:35.327401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.952 [2024-10-01 13:52:35.327418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.952 [2024-10-01 13:52:35.327453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.952 [2024-10-01 13:52:35.327473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.952 [2024-10-01 13:52:35.327487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.952 [2024-10-01 13:52:35.327501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.952 [2024-10-01 13:52:35.327530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.952 [2024-10-01 13:52:35.334656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.952 [2024-10-01 13:52:35.334783] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.953 [2024-10-01 13:52:35.334828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.953 [2024-10-01 13:52:35.334849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.953 [2024-10-01 13:52:35.334887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.953 [2024-10-01 13:52:35.334934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.953 [2024-10-01 13:52:35.334955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.953 [2024-10-01 13:52:35.334970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.953 [2024-10-01 13:52:35.335001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.953 [2024-10-01 13:52:35.335893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.953 [2024-10-01 13:52:35.336022] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.953 [2024-10-01 13:52:35.336066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.953 [2024-10-01 13:52:35.336086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.953 [2024-10-01 13:52:35.336119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.953 [2024-10-01 13:52:35.336151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.953 [2024-10-01 13:52:35.336169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.953 [2024-10-01 13:52:35.336212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.953 [2024-10-01 13:52:35.336245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.953 [2024-10-01 13:52:35.345376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.953 [2024-10-01 13:52:35.345509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.953 [2024-10-01 13:52:35.345552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.953 [2024-10-01 13:52:35.345573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.953 [2024-10-01 13:52:35.345607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.953 [2024-10-01 13:52:35.345638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.953 [2024-10-01 13:52:35.345656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.953 [2024-10-01 13:52:35.345671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.953 [2024-10-01 13:52:35.345702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.953 [2024-10-01 13:52:35.345989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.953 [2024-10-01 13:52:35.346103] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.953 [2024-10-01 13:52:35.346143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.953 [2024-10-01 13:52:35.346163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.953 [2024-10-01 13:52:35.346196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.953 [2024-10-01 13:52:35.346228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.953 [2024-10-01 13:52:35.346245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.953 [2024-10-01 13:52:35.346260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.953 [2024-10-01 13:52:35.346289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.953 [2024-10-01 13:52:35.356746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.953 [2024-10-01 13:52:35.356803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.953 [2024-10-01 13:52:35.357636] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.953 [2024-10-01 13:52:35.357682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.953 [2024-10-01 13:52:35.357704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.953 [2024-10-01 13:52:35.357757] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.953 [2024-10-01 13:52:35.357782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.953 [2024-10-01 13:52:35.357798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.953 [2024-10-01 13:52:35.358026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.953 [2024-10-01 13:52:35.358065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.953 [2024-10-01 13:52:35.358141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.953 [2024-10-01 13:52:35.358162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.953 [2024-10-01 13:52:35.358177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.953 [2024-10-01 13:52:35.358195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.953 [2024-10-01 13:52:35.358210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.953 [2024-10-01 13:52:35.358223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.953 [2024-10-01 13:52:35.358254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.953 [2024-10-01 13:52:35.358272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.953 [2024-10-01 13:52:35.366888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.953 [2024-10-01 13:52:35.366978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.953 [2024-10-01 13:52:35.367069] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.953 [2024-10-01 13:52:35.367100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.953 [2024-10-01 13:52:35.367118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.953 [2024-10-01 13:52:35.367184] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.953 [2024-10-01 13:52:35.367211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.953 [2024-10-01 13:52:35.367228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.953 [2024-10-01 13:52:35.367247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.953 [2024-10-01 13:52:35.367280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.953 [2024-10-01 13:52:35.367300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.953 [2024-10-01 13:52:35.367314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.953 [2024-10-01 13:52:35.367328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.953 [2024-10-01 13:52:35.368558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.953 [2024-10-01 13:52:35.368597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.953 [2024-10-01 13:52:35.368616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.953 [2024-10-01 13:52:35.368630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.953 [2024-10-01 13:52:35.368860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.953 [2024-10-01 13:52:35.376993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.953 [2024-10-01 13:52:35.377119] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.953 [2024-10-01 13:52:35.377164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.953 [2024-10-01 13:52:35.377186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.953 [2024-10-01 13:52:35.377232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.953 [2024-10-01 13:52:35.377302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.953 [2024-10-01 13:52:35.377337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.953 [2024-10-01 13:52:35.377354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.953 [2024-10-01 13:52:35.377368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.953 [2024-10-01 13:52:35.377398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.953 [2024-10-01 13:52:35.377458] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.953 [2024-10-01 13:52:35.377484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.953 [2024-10-01 13:52:35.377500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.953 [2024-10-01 13:52:35.377532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.953 [2024-10-01 13:52:35.377562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.953 [2024-10-01 13:52:35.377579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.953 [2024-10-01 13:52:35.377593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.953 [2024-10-01 13:52:35.377622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.953 [2024-10-01 13:52:35.387414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.953 [2024-10-01 13:52:35.388224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.953 [2024-10-01 13:52:35.388330] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.953 [2024-10-01 13:52:35.388371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.953 [2024-10-01 13:52:35.388390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.953 [2024-10-01 13:52:35.388631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.954 [2024-10-01 13:52:35.388671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.954 [2024-10-01 13:52:35.388691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.954 [2024-10-01 13:52:35.388711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.954 [2024-10-01 13:52:35.388759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.954 [2024-10-01 13:52:35.388782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.954 [2024-10-01 13:52:35.388797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.954 [2024-10-01 13:52:35.388811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.954 [2024-10-01 13:52:35.388844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.954 [2024-10-01 13:52:35.388862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.954 [2024-10-01 13:52:35.388877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.954 [2024-10-01 13:52:35.388891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.954 [2024-10-01 13:52:35.388957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.954 [2024-10-01 13:52:35.397520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.954 [2024-10-01 13:52:35.397643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.954 [2024-10-01 13:52:35.397689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.954 [2024-10-01 13:52:35.397709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.954 [2024-10-01 13:52:35.397743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.954 [2024-10-01 13:52:35.397781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.954 [2024-10-01 13:52:35.397798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.954 [2024-10-01 13:52:35.397813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.954 [2024-10-01 13:52:35.399085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.954 [2024-10-01 13:52:35.399379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.954 [2024-10-01 13:52:35.399488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.954 [2024-10-01 13:52:35.399528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.954 [2024-10-01 13:52:35.399548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.954 [2024-10-01 13:52:35.399581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.954 [2024-10-01 13:52:35.399612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.954 [2024-10-01 13:52:35.399629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.954 [2024-10-01 13:52:35.399643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.954 [2024-10-01 13:52:35.399673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.954 [2024-10-01 13:52:35.407613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.954 [2024-10-01 13:52:35.407736] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.954 [2024-10-01 13:52:35.407777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.954 [2024-10-01 13:52:35.407798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.954 [2024-10-01 13:52:35.407831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.954 [2024-10-01 13:52:35.407862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.954 [2024-10-01 13:52:35.407880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.954 [2024-10-01 13:52:35.407895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.954 [2024-10-01 13:52:35.407941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.954 [2024-10-01 13:52:35.410419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.954 [2024-10-01 13:52:35.411141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.954 [2024-10-01 13:52:35.411187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.954 [2024-10-01 13:52:35.411237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.954 [2024-10-01 13:52:35.411352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.954 [2024-10-01 13:52:35.411403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.954 [2024-10-01 13:52:35.411424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.954 [2024-10-01 13:52:35.411439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.954 [2024-10-01 13:52:35.411470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.954 [2024-10-01 13:52:35.418002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.954 [2024-10-01 13:52:35.418839] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.954 [2024-10-01 13:52:35.418892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.954 [2024-10-01 13:52:35.418925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.954 [2024-10-01 13:52:35.419132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.954 [2024-10-01 13:52:35.419192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.954 [2024-10-01 13:52:35.419214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.954 [2024-10-01 13:52:35.419228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.954 [2024-10-01 13:52:35.419268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.954 [2024-10-01 13:52:35.422012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.954 [2024-10-01 13:52:35.422126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.954 [2024-10-01 13:52:35.422167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.954 [2024-10-01 13:52:35.422187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.954 [2024-10-01 13:52:35.422221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.954 [2024-10-01 13:52:35.422252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.954 [2024-10-01 13:52:35.422269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.954 [2024-10-01 13:52:35.422284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.954 [2024-10-01 13:52:35.422315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.954 [2024-10-01 13:52:35.428096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.954 [2024-10-01 13:52:35.428216] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.954 [2024-10-01 13:52:35.428248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.954 [2024-10-01 13:52:35.428265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.954 [2024-10-01 13:52:35.428299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.954 [2024-10-01 13:52:35.428331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.954 [2024-10-01 13:52:35.428372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.954 [2024-10-01 13:52:35.428388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.954 [2024-10-01 13:52:35.428421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.954 [2024-10-01 13:52:35.433189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.954 [2024-10-01 13:52:35.434042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.954 [2024-10-01 13:52:35.434088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.954 [2024-10-01 13:52:35.434109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.954 [2024-10-01 13:52:35.434295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.954 [2024-10-01 13:52:35.434354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.954 [2024-10-01 13:52:35.434376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.954 [2024-10-01 13:52:35.434391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.954 [2024-10-01 13:52:35.434423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.954 [2024-10-01 13:52:35.438189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.955 [2024-10-01 13:52:35.438319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.955 [2024-10-01 13:52:35.438361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.955 [2024-10-01 13:52:35.438381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.955 [2024-10-01 13:52:35.438414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.955 [2024-10-01 13:52:35.438445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.955 [2024-10-01 13:52:35.438473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.955 [2024-10-01 13:52:35.438488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.955 [2024-10-01 13:52:35.438518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.955 [2024-10-01 13:52:35.443279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.955 [2024-10-01 13:52:35.443393] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.955 [2024-10-01 13:52:35.443426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.955 [2024-10-01 13:52:35.443444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.955 [2024-10-01 13:52:35.443477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.955 [2024-10-01 13:52:35.443517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.955 [2024-10-01 13:52:35.443534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.955 [2024-10-01 13:52:35.443549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.955 [2024-10-01 13:52:35.443579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.955 [2024-10-01 13:52:35.448550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.955 [2024-10-01 13:52:35.449402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.955 [2024-10-01 13:52:35.449447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.955 [2024-10-01 13:52:35.449468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.955 [2024-10-01 13:52:35.449669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.955 [2024-10-01 13:52:35.449730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.955 [2024-10-01 13:52:35.449752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.955 [2024-10-01 13:52:35.449767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.955 [2024-10-01 13:52:35.449798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.955 [2024-10-01 13:52:35.453386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.955 [2024-10-01 13:52:35.453510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.955 [2024-10-01 13:52:35.453550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.955 [2024-10-01 13:52:35.453571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.955 [2024-10-01 13:52:35.453604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.955 [2024-10-01 13:52:35.453634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.955 [2024-10-01 13:52:35.453652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.955 [2024-10-01 13:52:35.453666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.955 [2024-10-01 13:52:35.453697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.955 [2024-10-01 13:52:35.458644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.955 [2024-10-01 13:52:35.458757] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.955 [2024-10-01 13:52:35.458790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.955 [2024-10-01 13:52:35.458808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.955 [2024-10-01 13:52:35.458840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.955 [2024-10-01 13:52:35.458872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.955 [2024-10-01 13:52:35.458889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.955 [2024-10-01 13:52:35.458903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.955 [2024-10-01 13:52:35.458962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.955 [2024-10-01 13:52:35.463816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.955 [2024-10-01 13:52:35.464644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.955 [2024-10-01 13:52:35.464689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.955 [2024-10-01 13:52:35.464710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.955 [2024-10-01 13:52:35.464936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.955 [2024-10-01 13:52:35.464997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.955 [2024-10-01 13:52:35.465019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.955 [2024-10-01 13:52:35.465034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.955 [2024-10-01 13:52:35.465066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.955 [2024-10-01 13:52:35.468733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.955 [2024-10-01 13:52:35.468860] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.955 [2024-10-01 13:52:35.468902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.955 [2024-10-01 13:52:35.468938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.955 [2024-10-01 13:52:35.468974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.955 [2024-10-01 13:52:35.469006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.955 [2024-10-01 13:52:35.469023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.955 [2024-10-01 13:52:35.469037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.955 [2024-10-01 13:52:35.469067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.955 [2024-10-01 13:52:35.473925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.955 [2024-10-01 13:52:35.474037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.955 [2024-10-01 13:52:35.474068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.955 [2024-10-01 13:52:35.474086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.955 [2024-10-01 13:52:35.474127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.955 [2024-10-01 13:52:35.474158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.955 [2024-10-01 13:52:35.474176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.955 [2024-10-01 13:52:35.474190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.955 [2024-10-01 13:52:35.474220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.955 [2024-10-01 13:52:35.479074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.955 [2024-10-01 13:52:35.479190] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.955 [2024-10-01 13:52:35.479232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.955 [2024-10-01 13:52:35.479253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.955 [2024-10-01 13:52:35.480000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.955 [2024-10-01 13:52:35.480220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.955 [2024-10-01 13:52:35.480257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.955 [2024-10-01 13:52:35.480295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.955 [2024-10-01 13:52:35.480340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.955 [2024-10-01 13:52:35.484032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.955 [2024-10-01 13:52:35.484152] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.955 [2024-10-01 13:52:35.484193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.955 [2024-10-01 13:52:35.484213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.955 [2024-10-01 13:52:35.484246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.955 [2024-10-01 13:52:35.484277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.955 [2024-10-01 13:52:35.484294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.955 [2024-10-01 13:52:35.484309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.955 [2024-10-01 13:52:35.484340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.955 [2024-10-01 13:52:35.489166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.955 [2024-10-01 13:52:35.489281] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.955 [2024-10-01 13:52:35.489312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.955 [2024-10-01 13:52:35.489330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.956 [2024-10-01 13:52:35.489363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.956 [2024-10-01 13:52:35.489394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.956 [2024-10-01 13:52:35.489411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.956 [2024-10-01 13:52:35.489425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.956 [2024-10-01 13:52:35.489455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.956 [2024-10-01 13:52:35.494349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.956 [2024-10-01 13:52:35.495191] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.956 [2024-10-01 13:52:35.495235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.956 [2024-10-01 13:52:35.495256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.956 [2024-10-01 13:52:35.495441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.956 [2024-10-01 13:52:35.495500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.956 [2024-10-01 13:52:35.495522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.956 [2024-10-01 13:52:35.495537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.956 [2024-10-01 13:52:35.495570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.956 [2024-10-01 13:52:35.499254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.956 [2024-10-01 13:52:35.499404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.956 [2024-10-01 13:52:35.499446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.956 [2024-10-01 13:52:35.499467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.956 [2024-10-01 13:52:35.499500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.956 [2024-10-01 13:52:35.499531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.956 [2024-10-01 13:52:35.499548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.956 [2024-10-01 13:52:35.499562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.956 [2024-10-01 13:52:35.499592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.956 [2024-10-01 13:52:35.504441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.956 [2024-10-01 13:52:35.504556] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.956 [2024-10-01 13:52:35.504601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.956 [2024-10-01 13:52:35.504621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.956 [2024-10-01 13:52:35.504655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.956 [2024-10-01 13:52:35.504686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.956 [2024-10-01 13:52:35.504703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.956 [2024-10-01 13:52:35.504717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.956 [2024-10-01 13:52:35.504748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.956 [2024-10-01 13:52:35.509575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.956 [2024-10-01 13:52:35.510421] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.956 [2024-10-01 13:52:35.510467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.956 [2024-10-01 13:52:35.510488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.956 [2024-10-01 13:52:35.510699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.956 [2024-10-01 13:52:35.510761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.956 [2024-10-01 13:52:35.510783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.956 [2024-10-01 13:52:35.510797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.956 [2024-10-01 13:52:35.510828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.956 [2024-10-01 13:52:35.514528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.956 [2024-10-01 13:52:35.514665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.956 [2024-10-01 13:52:35.514705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.956 [2024-10-01 13:52:35.514725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.956 [2024-10-01 13:52:35.514758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.956 [2024-10-01 13:52:35.514811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.956 [2024-10-01 13:52:35.514830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.956 [2024-10-01 13:52:35.514845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.956 [2024-10-01 13:52:35.514875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.956 [2024-10-01 13:52:35.519663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.956 [2024-10-01 13:52:35.519778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.956 [2024-10-01 13:52:35.519818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.956 [2024-10-01 13:52:35.519839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.956 [2024-10-01 13:52:35.519871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.956 [2024-10-01 13:52:35.519902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.956 [2024-10-01 13:52:35.519934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.956 [2024-10-01 13:52:35.519950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.956 [2024-10-01 13:52:35.519982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.956 [2024-10-01 13:52:35.524894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.956 [2024-10-01 13:52:35.525727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.956 [2024-10-01 13:52:35.525773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.956 [2024-10-01 13:52:35.525794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.956 [2024-10-01 13:52:35.525993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.956 [2024-10-01 13:52:35.526051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.956 [2024-10-01 13:52:35.526074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.956 [2024-10-01 13:52:35.526088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.956 [2024-10-01 13:52:35.526120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.956 [2024-10-01 13:52:35.529750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.956 [2024-10-01 13:52:35.529887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.956 [2024-10-01 13:52:35.529939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.956 [2024-10-01 13:52:35.529961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.956 [2024-10-01 13:52:35.529994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.956 [2024-10-01 13:52:35.530025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.956 [2024-10-01 13:52:35.530042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.956 [2024-10-01 13:52:35.530056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.956 [2024-10-01 13:52:35.530105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.956 [2024-10-01 13:52:35.535002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.956 [2024-10-01 13:52:35.535117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.956 [2024-10-01 13:52:35.535150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.956 [2024-10-01 13:52:35.535168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.956 [2024-10-01 13:52:35.535201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.956 [2024-10-01 13:52:35.535232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.956 [2024-10-01 13:52:35.535249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.956 [2024-10-01 13:52:35.535263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.956 [2024-10-01 13:52:35.535293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.956 [2024-10-01 13:52:35.540067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.956 [2024-10-01 13:52:35.540901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.956 [2024-10-01 13:52:35.540961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.956 [2024-10-01 13:52:35.540983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.956 [2024-10-01 13:52:35.541182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.956 [2024-10-01 13:52:35.541243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.956 [2024-10-01 13:52:35.541264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.957 [2024-10-01 13:52:35.541278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.957 [2024-10-01 13:52:35.541310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.957 [2024-10-01 13:52:35.545101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.957 [2024-10-01 13:52:35.545227] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.957 [2024-10-01 13:52:35.545268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.957 [2024-10-01 13:52:35.545288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.957 [2024-10-01 13:52:35.545322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.957 [2024-10-01 13:52:35.545354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.957 [2024-10-01 13:52:35.545371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.957 [2024-10-01 13:52:35.545386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.957 [2024-10-01 13:52:35.545416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.957 [2024-10-01 13:52:35.550157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.957 [2024-10-01 13:52:35.550312] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.957 [2024-10-01 13:52:35.550356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.957 [2024-10-01 13:52:35.550400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.957 [2024-10-01 13:52:35.550438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.957 [2024-10-01 13:52:35.550470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.957 [2024-10-01 13:52:35.550487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.957 [2024-10-01 13:52:35.550501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.957 [2024-10-01 13:52:35.551784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.957 [2024-10-01 13:52:35.555221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.957 [2024-10-01 13:52:35.555336] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.957 [2024-10-01 13:52:35.555369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.957 [2024-10-01 13:52:35.555387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.957 [2024-10-01 13:52:35.556137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.957 [2024-10-01 13:52:35.556343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.957 [2024-10-01 13:52:35.556379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.957 [2024-10-01 13:52:35.556398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.957 [2024-10-01 13:52:35.556439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.957 [2024-10-01 13:52:35.560278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.957 [2024-10-01 13:52:35.560403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.957 [2024-10-01 13:52:35.560445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.957 [2024-10-01 13:52:35.560467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.957 [2024-10-01 13:52:35.560500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.957 [2024-10-01 13:52:35.560531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.957 [2024-10-01 13:52:35.560548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.957 [2024-10-01 13:52:35.560562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.957 [2024-10-01 13:52:35.560591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.957 [2024-10-01 13:52:35.565309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.957 [2024-10-01 13:52:35.565422] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.957 [2024-10-01 13:52:35.565468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.957 [2024-10-01 13:52:35.565489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.957 [2024-10-01 13:52:35.565522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.957 [2024-10-01 13:52:35.565553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.957 [2024-10-01 13:52:35.565589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.957 [2024-10-01 13:52:35.565605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.957 [2024-10-01 13:52:35.565637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.957 [2024-10-01 13:52:35.570613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.957 [2024-10-01 13:52:35.570729] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.957 [2024-10-01 13:52:35.570772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.957 [2024-10-01 13:52:35.570792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.957 [2024-10-01 13:52:35.571539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.957 [2024-10-01 13:52:35.571759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.957 [2024-10-01 13:52:35.571801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.957 [2024-10-01 13:52:35.571819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.957 [2024-10-01 13:52:35.571860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.957 [2024-10-01 13:52:35.575401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.957 [2024-10-01 13:52:35.575525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.957 [2024-10-01 13:52:35.575558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.957 [2024-10-01 13:52:35.575575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.957 [2024-10-01 13:52:35.575609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.957 [2024-10-01 13:52:35.575654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.957 [2024-10-01 13:52:35.575674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.957 [2024-10-01 13:52:35.575689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.957 [2024-10-01 13:52:35.575719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.957 [2024-10-01 13:52:35.580698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.957 [2024-10-01 13:52:35.580815] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.957 [2024-10-01 13:52:35.580846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.957 [2024-10-01 13:52:35.580863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.957 [2024-10-01 13:52:35.580896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.957 [2024-10-01 13:52:35.580948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.957 [2024-10-01 13:52:35.580968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.957 [2024-10-01 13:52:35.580982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.957 [2024-10-01 13:52:35.581012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.957 [2024-10-01 13:52:35.585986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.957 [2024-10-01 13:52:35.586114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.957 [2024-10-01 13:52:35.586145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.957 [2024-10-01 13:52:35.586163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.957 [2024-10-01 13:52:35.586935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.957 [2024-10-01 13:52:35.587140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.957 [2024-10-01 13:52:35.587177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.957 [2024-10-01 13:52:35.587195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.957 [2024-10-01 13:52:35.587236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.957 [2024-10-01 13:52:35.590791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.957 [2024-10-01 13:52:35.590935] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.957 [2024-10-01 13:52:35.590977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.957 [2024-10-01 13:52:35.590997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.957 [2024-10-01 13:52:35.591030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.957 [2024-10-01 13:52:35.591061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.957 [2024-10-01 13:52:35.591078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.957 [2024-10-01 13:52:35.591093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.957 [2024-10-01 13:52:35.591123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.957 [2024-10-01 13:52:35.596075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.957 [2024-10-01 13:52:35.596189] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.958 [2024-10-01 13:52:35.596221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.958 [2024-10-01 13:52:35.596239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.958 [2024-10-01 13:52:35.596272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.958 [2024-10-01 13:52:35.596303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.958 [2024-10-01 13:52:35.596320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.958 [2024-10-01 13:52:35.596336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.958 [2024-10-01 13:52:35.596366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.958 [2024-10-01 13:52:35.601541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.958 [2024-10-01 13:52:35.602381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.958 [2024-10-01 13:52:35.602426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.958 [2024-10-01 13:52:35.602447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.958 [2024-10-01 13:52:35.602698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.958 [2024-10-01 13:52:35.602750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.958 [2024-10-01 13:52:35.602771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.958 [2024-10-01 13:52:35.602785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.958 [2024-10-01 13:52:35.602817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.958 [2024-10-01 13:52:35.606167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.958 [2024-10-01 13:52:35.606315] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.958 [2024-10-01 13:52:35.606356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.958 [2024-10-01 13:52:35.606377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.958 [2024-10-01 13:52:35.606410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.958 [2024-10-01 13:52:35.606460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.958 [2024-10-01 13:52:35.606482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.958 [2024-10-01 13:52:35.606497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.958 [2024-10-01 13:52:35.606527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.958 [2024-10-01 13:52:35.611632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.958 [2024-10-01 13:52:35.611746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.958 [2024-10-01 13:52:35.611778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.958 [2024-10-01 13:52:35.611795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.958 [2024-10-01 13:52:35.611828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.958 [2024-10-01 13:52:35.611859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.958 [2024-10-01 13:52:35.611875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.958 [2024-10-01 13:52:35.611890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.958 [2024-10-01 13:52:35.611937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.958 [2024-10-01 13:52:35.617230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.958 [2024-10-01 13:52:35.617352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.958 [2024-10-01 13:52:35.617393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.958 [2024-10-01 13:52:35.617413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.958 [2024-10-01 13:52:35.618162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.958 [2024-10-01 13:52:35.618376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.958 [2024-10-01 13:52:35.618413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.958 [2024-10-01 13:52:35.618449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.958 [2024-10-01 13:52:35.618492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.958 [2024-10-01 13:52:35.621727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.958 [2024-10-01 13:52:35.621856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.958 [2024-10-01 13:52:35.621889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.958 [2024-10-01 13:52:35.621907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.958 [2024-10-01 13:52:35.621957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.958 [2024-10-01 13:52:35.621989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.958 [2024-10-01 13:52:35.622007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.958 [2024-10-01 13:52:35.622021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.958 [2024-10-01 13:52:35.622051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.958 [2024-10-01 13:52:35.627324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.958 [2024-10-01 13:52:35.627439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.958 [2024-10-01 13:52:35.627471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.958 [2024-10-01 13:52:35.627489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.958 [2024-10-01 13:52:35.627522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.958 [2024-10-01 13:52:35.627553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.958 [2024-10-01 13:52:35.627570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.958 [2024-10-01 13:52:35.627584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.958 [2024-10-01 13:52:35.627615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.958 [2024-10-01 13:52:35.632710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.958 [2024-10-01 13:52:35.633545] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.958 [2024-10-01 13:52:35.633590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.958 [2024-10-01 13:52:35.633611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.958 [2024-10-01 13:52:35.633795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.958 [2024-10-01 13:52:35.633858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.958 [2024-10-01 13:52:35.633880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.958 [2024-10-01 13:52:35.633894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.958 [2024-10-01 13:52:35.633940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.958 [2024-10-01 13:52:35.637414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.958 [2024-10-01 13:52:35.637570] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.958 [2024-10-01 13:52:35.637612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.958 [2024-10-01 13:52:35.637633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.958 [2024-10-01 13:52:35.637667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.958 [2024-10-01 13:52:35.637698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.958 [2024-10-01 13:52:35.637716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.958 [2024-10-01 13:52:35.637731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.958 [2024-10-01 13:52:35.637761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.958 [2024-10-01 13:52:35.642804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.958 [2024-10-01 13:52:35.642942] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.958 [2024-10-01 13:52:35.642997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.958 [2024-10-01 13:52:35.643017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.958 [2024-10-01 13:52:35.643055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.958 [2024-10-01 13:52:35.643087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.958 [2024-10-01 13:52:35.643104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.958 [2024-10-01 13:52:35.643118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.958 [2024-10-01 13:52:35.643149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.958 [2024-10-01 13:52:35.648109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.958 [2024-10-01 13:52:35.648938] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.958 [2024-10-01 13:52:35.648983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.958 [2024-10-01 13:52:35.649004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.958 [2024-10-01 13:52:35.649213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.959 [2024-10-01 13:52:35.649273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.959 [2024-10-01 13:52:35.649295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.959 [2024-10-01 13:52:35.649309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.959 [2024-10-01 13:52:35.649341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.959 [2024-10-01 13:52:35.652930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.959 [2024-10-01 13:52:35.653049] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.959 [2024-10-01 13:52:35.653090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.959 [2024-10-01 13:52:35.653111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.959 [2024-10-01 13:52:35.653145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.959 [2024-10-01 13:52:35.653197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.959 [2024-10-01 13:52:35.653216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.959 [2024-10-01 13:52:35.653230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.959 [2024-10-01 13:52:35.653260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.959 [2024-10-01 13:52:35.658200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.959 [2024-10-01 13:52:35.658326] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.959 [2024-10-01 13:52:35.658358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.959 [2024-10-01 13:52:35.658377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.959 [2024-10-01 13:52:35.658411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.959 [2024-10-01 13:52:35.658442] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.959 [2024-10-01 13:52:35.658460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.959 [2024-10-01 13:52:35.658474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.959 [2024-10-01 13:52:35.658504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.959 [2024-10-01 13:52:35.669083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.959 [2024-10-01 13:52:35.671396] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.959 [2024-10-01 13:52:35.671505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.959 [2024-10-01 13:52:35.671552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.959 [2024-10-01 13:52:35.673598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.959 [2024-10-01 13:52:35.673717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.959 [2024-10-01 13:52:35.674278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.959 [2024-10-01 13:52:35.674358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.959 [2024-10-01 13:52:35.674400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.959 [2024-10-01 13:52:35.674435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.959 [2024-10-01 13:52:35.674467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.959 [2024-10-01 13:52:35.674500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.959 [2024-10-01 13:52:35.676839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.959 [2024-10-01 13:52:35.676942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.959 [2024-10-01 13:52:35.678854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.959 [2024-10-01 13:52:35.678952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.959 [2024-10-01 13:52:35.678993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.959 [2024-10-01 13:52:35.679255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.959 [2024-10-01 13:52:35.680630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.959 [2024-10-01 13:52:35.681806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.959 [2024-10-01 13:52:35.681850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.959 [2024-10-01 13:52:35.681871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.959 [2024-10-01 13:52:35.682096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.959 [2024-10-01 13:52:35.683030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.959 [2024-10-01 13:52:35.683067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.959 [2024-10-01 13:52:35.683088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.959 [2024-10-01 13:52:35.683757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.959 [2024-10-01 13:52:35.684845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.959 [2024-10-01 13:52:35.685434] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.959 [2024-10-01 13:52:35.685476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.959 [2024-10-01 13:52:35.685497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.959 [2024-10-01 13:52:35.686713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.959 [2024-10-01 13:52:35.687650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.959 [2024-10-01 13:52:35.687688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.959 [2024-10-01 13:52:35.687707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.959 [2024-10-01 13:52:35.688292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.959 [2024-10-01 13:52:35.691298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.959 [2024-10-01 13:52:35.691419] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.959 [2024-10-01 13:52:35.691458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.959 [2024-10-01 13:52:35.691478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.959 [2024-10-01 13:52:35.691513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.959 [2024-10-01 13:52:35.691544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.959 [2024-10-01 13:52:35.691561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.959 [2024-10-01 13:52:35.691576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.959 [2024-10-01 13:52:35.691608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.959 [2024-10-01 13:52:35.694947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.959 [2024-10-01 13:52:35.695058] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.959 [2024-10-01 13:52:35.695097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.959 [2024-10-01 13:52:35.695134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.959 [2024-10-01 13:52:35.695170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.959 [2024-10-01 13:52:35.695222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.959 [2024-10-01 13:52:35.695244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.959 [2024-10-01 13:52:35.695259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.959 [2024-10-01 13:52:35.696448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.959 [2024-10-01 13:52:35.701388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.959 [2024-10-01 13:52:35.701500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.959 [2024-10-01 13:52:35.701531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.959 [2024-10-01 13:52:35.701549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.959 [2024-10-01 13:52:35.701583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.959 [2024-10-01 13:52:35.701615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.960 [2024-10-01 13:52:35.701632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.960 [2024-10-01 13:52:35.701647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.960 [2024-10-01 13:52:35.701677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.960 [2024-10-01 13:52:35.705035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.960 [2024-10-01 13:52:35.705150] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.960 [2024-10-01 13:52:35.705190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.960 [2024-10-01 13:52:35.705211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.960 [2024-10-01 13:52:35.705245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.960 [2024-10-01 13:52:35.705277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.960 [2024-10-01 13:52:35.705296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.960 [2024-10-01 13:52:35.705310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.960 [2024-10-01 13:52:35.705341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.960 [2024-10-01 13:52:35.711604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.960 [2024-10-01 13:52:35.711717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.960 [2024-10-01 13:52:35.711750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.960 [2024-10-01 13:52:35.711769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.960 [2024-10-01 13:52:35.712514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.960 [2024-10-01 13:52:35.712732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.960 [2024-10-01 13:52:35.712784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.960 [2024-10-01 13:52:35.712803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.960 [2024-10-01 13:52:35.712847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.960 [2024-10-01 13:52:35.715626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.960 [2024-10-01 13:52:35.715737] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.960 [2024-10-01 13:52:35.715768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.960 [2024-10-01 13:52:35.715787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.960 [2024-10-01 13:52:35.715820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.960 [2024-10-01 13:52:35.715852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.960 [2024-10-01 13:52:35.715869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.960 [2024-10-01 13:52:35.715885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.960 [2024-10-01 13:52:35.715932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.960 [2024-10-01 13:52:35.721690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.960 [2024-10-01 13:52:35.721802] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.960 [2024-10-01 13:52:35.721834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.960 [2024-10-01 13:52:35.721853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.960 [2024-10-01 13:52:35.721886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.960 [2024-10-01 13:52:35.721935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.960 [2024-10-01 13:52:35.721957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.960 [2024-10-01 13:52:35.721971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.960 [2024-10-01 13:52:35.723195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.960 [2024-10-01 13:52:35.726597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.960 [2024-10-01 13:52:35.726708] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.960 [2024-10-01 13:52:35.726739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.960 [2024-10-01 13:52:35.726758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.960 [2024-10-01 13:52:35.727502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.960 [2024-10-01 13:52:35.727699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.960 [2024-10-01 13:52:35.727733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.960 [2024-10-01 13:52:35.727751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.960 [2024-10-01 13:52:35.727792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.960 [2024-10-01 13:52:35.731776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.960 [2024-10-01 13:52:35.731887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.960 [2024-10-01 13:52:35.731936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.960 [2024-10-01 13:52:35.731958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.960 [2024-10-01 13:52:35.731992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.960 [2024-10-01 13:52:35.732023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.960 [2024-10-01 13:52:35.732041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.960 [2024-10-01 13:52:35.732055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.960 [2024-10-01 13:52:35.732085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.960 [2024-10-01 13:52:35.736685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.960 [2024-10-01 13:52:35.736796] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.960 [2024-10-01 13:52:35.736829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.960 [2024-10-01 13:52:35.736848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.960 [2024-10-01 13:52:35.736881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.960 [2024-10-01 13:52:35.736929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.960 [2024-10-01 13:52:35.736950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.960 [2024-10-01 13:52:35.736965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.960 [2024-10-01 13:52:35.736996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.960 [2024-10-01 13:52:35.742656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.960 [2024-10-01 13:52:35.742769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.960 [2024-10-01 13:52:35.742808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.960 [2024-10-01 13:52:35.742828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.960 [2024-10-01 13:52:35.742862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.960 [2024-10-01 13:52:35.742893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.960 [2024-10-01 13:52:35.742924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.960 [2024-10-01 13:52:35.742943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.960 [2024-10-01 13:52:35.742975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.960 [2024-10-01 13:52:35.746777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.960 [2024-10-01 13:52:35.746887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.960 [2024-10-01 13:52:35.746931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.960 [2024-10-01 13:52:35.746952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.960 [2024-10-01 13:52:35.747004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.960 [2024-10-01 13:52:35.747038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.960 [2024-10-01 13:52:35.747056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.960 [2024-10-01 13:52:35.747070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.960 [2024-10-01 13:52:35.747100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.960 [2024-10-01 13:52:35.753455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.960 [2024-10-01 13:52:35.753570] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.960 [2024-10-01 13:52:35.753601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.960 [2024-10-01 13:52:35.753621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.960 [2024-10-01 13:52:35.753654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.960 [2024-10-01 13:52:35.753686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.960 [2024-10-01 13:52:35.753703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.960 [2024-10-01 13:52:35.753718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.960 [2024-10-01 13:52:35.753750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.960 [2024-10-01 13:52:35.757646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.961 [2024-10-01 13:52:35.757757] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.961 [2024-10-01 13:52:35.757788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.961 [2024-10-01 13:52:35.757806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.961 [2024-10-01 13:52:35.757839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.961 [2024-10-01 13:52:35.757870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.961 [2024-10-01 13:52:35.757887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.961 [2024-10-01 13:52:35.757902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.961 [2024-10-01 13:52:35.757950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.961 [2024-10-01 13:52:35.765011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.961 [2024-10-01 13:52:35.765124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.961 [2024-10-01 13:52:35.765156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.961 [2024-10-01 13:52:35.765174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.961 [2024-10-01 13:52:35.765206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.961 [2024-10-01 13:52:35.765238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.961 [2024-10-01 13:52:35.765255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.961 [2024-10-01 13:52:35.765291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.961 [2024-10-01 13:52:35.765324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.961 [2024-10-01 13:52:35.768538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.961 [2024-10-01 13:52:35.768648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.961 [2024-10-01 13:52:35.768680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.961 [2024-10-01 13:52:35.768698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.961 [2024-10-01 13:52:35.768731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.961 [2024-10-01 13:52:35.768763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.961 [2024-10-01 13:52:35.768780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.961 [2024-10-01 13:52:35.768794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.961 [2024-10-01 13:52:35.768825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.961 [2024-10-01 13:52:35.775748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.961 [2024-10-01 13:52:35.775857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.961 [2024-10-01 13:52:35.775889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.961 [2024-10-01 13:52:35.775906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.961 [2024-10-01 13:52:35.775956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.961 [2024-10-01 13:52:35.776004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.961 [2024-10-01 13:52:35.776022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.961 [2024-10-01 13:52:35.776036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.961 [2024-10-01 13:52:35.776066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.961 [2024-10-01 13:52:35.780027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.961 [2024-10-01 13:52:35.780187] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.961 [2024-10-01 13:52:35.780226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.961 [2024-10-01 13:52:35.780246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.961 [2024-10-01 13:52:35.780279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.961 [2024-10-01 13:52:35.780310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.961 [2024-10-01 13:52:35.780327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.961 [2024-10-01 13:52:35.780341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.961 [2024-10-01 13:52:35.780371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.961 [2024-10-01 13:52:35.786813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.961 [2024-10-01 13:52:35.787665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.961 [2024-10-01 13:52:35.787708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.961 [2024-10-01 13:52:35.787729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.961 [2024-10-01 13:52:35.787905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.961 [2024-10-01 13:52:35.787975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.961 [2024-10-01 13:52:35.787996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.961 [2024-10-01 13:52:35.788011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.961 [2024-10-01 13:52:35.788043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.961 [2024-10-01 13:52:35.790863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.961 [2024-10-01 13:52:35.790987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.961 [2024-10-01 13:52:35.791026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.961 [2024-10-01 13:52:35.791047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.961 [2024-10-01 13:52:35.791080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.961 [2024-10-01 13:52:35.791112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.961 [2024-10-01 13:52:35.791130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.961 [2024-10-01 13:52:35.791144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.961 [2024-10-01 13:52:35.791175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.961 [2024-10-01 13:52:35.796935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.961 [2024-10-01 13:52:35.797043] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.961 [2024-10-01 13:52:35.797075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.961 [2024-10-01 13:52:35.797093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.961 [2024-10-01 13:52:35.797124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.961 [2024-10-01 13:52:35.797155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.961 [2024-10-01 13:52:35.797173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.961 [2024-10-01 13:52:35.797187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.961 [2024-10-01 13:52:35.798415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.961 [2024-10-01 13:52:35.801865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.961 [2024-10-01 13:52:35.801988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.961 [2024-10-01 13:52:35.802031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.961 [2024-10-01 13:52:35.802051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.961 [2024-10-01 13:52:35.802790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.961 [2024-10-01 13:52:35.803042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.961 [2024-10-01 13:52:35.803079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.961 [2024-10-01 13:52:35.803097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.961 [2024-10-01 13:52:35.803139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.961 [2024-10-01 13:52:35.807019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.961 [2024-10-01 13:52:35.807128] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.961 [2024-10-01 13:52:35.807167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.961 [2024-10-01 13:52:35.807188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.961 [2024-10-01 13:52:35.807222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.961 [2024-10-01 13:52:35.807253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.961 [2024-10-01 13:52:35.807270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.961 [2024-10-01 13:52:35.807285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.961 [2024-10-01 13:52:35.807316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.961 [2024-10-01 13:52:35.811969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.961 [2024-10-01 13:52:35.812079] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.961 [2024-10-01 13:52:35.812118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.961 [2024-10-01 13:52:35.812139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.962 [2024-10-01 13:52:35.812172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.962 [2024-10-01 13:52:35.812204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.962 [2024-10-01 13:52:35.812221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.962 [2024-10-01 13:52:35.812236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.962 [2024-10-01 13:52:35.812267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.962 [2024-10-01 13:52:35.817891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.962 [2024-10-01 13:52:35.818016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.962 [2024-10-01 13:52:35.818056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.962 [2024-10-01 13:52:35.818076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.962 [2024-10-01 13:52:35.818110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.962 [2024-10-01 13:52:35.818142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.962 [2024-10-01 13:52:35.818159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.962 [2024-10-01 13:52:35.818174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.962 [2024-10-01 13:52:35.818223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.962 [2024-10-01 13:52:35.822052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.962 [2024-10-01 13:52:35.822162] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.962 [2024-10-01 13:52:35.822201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.962 [2024-10-01 13:52:35.822222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.962 [2024-10-01 13:52:35.822255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.962 [2024-10-01 13:52:35.822286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.962 [2024-10-01 13:52:35.822304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.962 [2024-10-01 13:52:35.822318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.962 [2024-10-01 13:52:35.822349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.962 [2024-10-01 13:52:35.828675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.962 [2024-10-01 13:52:35.828784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.962 [2024-10-01 13:52:35.828814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.962 [2024-10-01 13:52:35.828831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.962 [2024-10-01 13:52:35.828863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.962 [2024-10-01 13:52:35.828894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.962 [2024-10-01 13:52:35.828927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.962 [2024-10-01 13:52:35.828945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.962 [2024-10-01 13:52:35.828976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.962 [2024-10-01 13:52:35.832858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.962 [2024-10-01 13:52:35.832982] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.962 [2024-10-01 13:52:35.833020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.962 [2024-10-01 13:52:35.833040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.962 [2024-10-01 13:52:35.833072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.962 [2024-10-01 13:52:35.833103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.962 [2024-10-01 13:52:35.833121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.962 [2024-10-01 13:52:35.833135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.962 [2024-10-01 13:52:35.833164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.962 [2024-10-01 13:52:35.840035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.962 [2024-10-01 13:52:35.840142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.962 [2024-10-01 13:52:35.840180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.962 [2024-10-01 13:52:35.840217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.962 [2024-10-01 13:52:35.840251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.962 [2024-10-01 13:52:35.840282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.962 [2024-10-01 13:52:35.840299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.962 [2024-10-01 13:52:35.840314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.962 [2024-10-01 13:52:35.840344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.962 [2024-10-01 13:52:35.843544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.962 [2024-10-01 13:52:35.843652] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.962 [2024-10-01 13:52:35.843690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.962 [2024-10-01 13:52:35.843709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.962 [2024-10-01 13:52:35.843741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.962 [2024-10-01 13:52:35.843771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.962 [2024-10-01 13:52:35.843788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.962 [2024-10-01 13:52:35.843802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.962 [2024-10-01 13:52:35.843832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.962 [2024-10-01 13:52:35.850718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.962 [2024-10-01 13:52:35.850829] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.962 [2024-10-01 13:52:35.850862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.962 [2024-10-01 13:52:35.850881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.962 [2024-10-01 13:52:35.850927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.962 [2024-10-01 13:52:35.850962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.962 [2024-10-01 13:52:35.850981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.962 [2024-10-01 13:52:35.850994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.962 [2024-10-01 13:52:35.851026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.962 [2024-10-01 13:52:35.855006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.962 [2024-10-01 13:52:35.855126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.962 [2024-10-01 13:52:35.855165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.962 [2024-10-01 13:52:35.855185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.962 [2024-10-01 13:52:35.855219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.962 [2024-10-01 13:52:35.855251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.962 [2024-10-01 13:52:35.855284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.962 [2024-10-01 13:52:35.855299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.962 [2024-10-01 13:52:35.855331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.962 [2024-10-01 13:52:35.861770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.962 [2024-10-01 13:52:35.861883] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.962 [2024-10-01 13:52:35.861934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.962 [2024-10-01 13:52:35.861956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.962 [2024-10-01 13:52:35.862706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.962 [2024-10-01 13:52:35.862933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.962 [2024-10-01 13:52:35.862968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.962 [2024-10-01 13:52:35.862986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.962 [2024-10-01 13:52:35.863028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.962 8039.29 IOPS, 31.40 MiB/s [2024-10-01 13:52:35.867470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.962 [2024-10-01 13:52:35.868332] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.962 [2024-10-01 13:52:35.868374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.962 [2024-10-01 13:52:35.868394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.962 [2024-10-01 13:52:35.868573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.962 [2024-10-01 13:52:35.869558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.962 [2024-10-01 13:52:35.869594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.963 [2024-10-01 13:52:35.869613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.963 [2024-10-01 13:52:35.870255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.963 [2024-10-01 13:52:35.871858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.963 [2024-10-01 13:52:35.871983] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.963 [2024-10-01 13:52:35.872023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.963 [2024-10-01 13:52:35.872043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.963 [2024-10-01 13:52:35.872077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.963 [2024-10-01 13:52:35.872108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.963 [2024-10-01 13:52:35.872126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.963 [2024-10-01 13:52:35.872140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.963 [2024-10-01 13:52:35.872171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.963 [2024-10-01 13:52:35.877805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.963 [2024-10-01 13:52:35.877931] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.963 [2024-10-01 13:52:35.877971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.963 [2024-10-01 13:52:35.877992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.963 [2024-10-01 13:52:35.878026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.963 [2024-10-01 13:52:35.878058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.963 [2024-10-01 13:52:35.878076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.963 [2024-10-01 13:52:35.878090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.963 [2024-10-01 13:52:35.878121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.963 [2024-10-01 13:52:35.881954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.963 [2024-10-01 13:52:35.882064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.963 [2024-10-01 13:52:35.882103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.963 [2024-10-01 13:52:35.882123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.963 [2024-10-01 13:52:35.882157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.963 [2024-10-01 13:52:35.882189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.963 [2024-10-01 13:52:35.882206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.963 [2024-10-01 13:52:35.882221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.963 [2024-10-01 13:52:35.882251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.963 [2024-10-01 13:52:35.888665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.963 [2024-10-01 13:52:35.888777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.963 [2024-10-01 13:52:35.888816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.963 [2024-10-01 13:52:35.888837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.963 [2024-10-01 13:52:35.888871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.963 [2024-10-01 13:52:35.888926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.963 [2024-10-01 13:52:35.888948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.963 [2024-10-01 13:52:35.888963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.963 [2024-10-01 13:52:35.888994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.963 [2024-10-01 13:52:35.892902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.963 [2024-10-01 13:52:35.893027] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.963 [2024-10-01 13:52:35.893066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.963 [2024-10-01 13:52:35.893103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.963 [2024-10-01 13:52:35.893139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.963 [2024-10-01 13:52:35.893171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.963 [2024-10-01 13:52:35.893189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.963 [2024-10-01 13:52:35.893203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.963 [2024-10-01 13:52:35.893233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.963 [2024-10-01 13:52:35.900876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.963 [2024-10-01 13:52:35.901468] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.963 [2024-10-01 13:52:35.901532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.963 [2024-10-01 13:52:35.901568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.963 [2024-10-01 13:52:35.901768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.963 [2024-10-01 13:52:35.901958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.963 [2024-10-01 13:52:35.902006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.963 [2024-10-01 13:52:35.902038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.963 [2024-10-01 13:52:35.902118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.963 [2024-10-01 13:52:35.904180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.963 [2024-10-01 13:52:35.905418] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.963 [2024-10-01 13:52:35.905471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.963 [2024-10-01 13:52:35.905505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.963 [2024-10-01 13:52:35.905759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.963 [2024-10-01 13:52:35.905893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.963 [2024-10-01 13:52:35.905950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.963 [2024-10-01 13:52:35.905972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.963 [2024-10-01 13:52:35.907585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.963 [2024-10-01 13:52:35.911705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.963 [2024-10-01 13:52:35.911829] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.963 [2024-10-01 13:52:35.911871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.963 [2024-10-01 13:52:35.911893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.963 [2024-10-01 13:52:35.911951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.963 [2024-10-01 13:52:35.911991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.963 [2024-10-01 13:52:35.912022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.964 [2024-10-01 13:52:35.912047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.964 [2024-10-01 13:52:35.912094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.964 [2024-10-01 13:52:35.916219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.964 [2024-10-01 13:52:35.916403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.964 [2024-10-01 13:52:35.916447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.964 [2024-10-01 13:52:35.916469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.964 [2024-10-01 13:52:35.916504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.964 [2024-10-01 13:52:35.916536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.964 [2024-10-01 13:52:35.916553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.964 [2024-10-01 13:52:35.916568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.964 [2024-10-01 13:52:35.916600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.964 [2024-10-01 13:52:35.922978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.964 [2024-10-01 13:52:35.923797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.964 [2024-10-01 13:52:35.923842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.964 [2024-10-01 13:52:35.923863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.964 [2024-10-01 13:52:35.924078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.964 [2024-10-01 13:52:35.924138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.964 [2024-10-01 13:52:35.924161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.964 [2024-10-01 13:52:35.924176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.964 [2024-10-01 13:52:35.924208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.964 [2024-10-01 13:52:35.927098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.964 [2024-10-01 13:52:35.927213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.964 [2024-10-01 13:52:35.927254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.964 [2024-10-01 13:52:35.927284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.964 [2024-10-01 13:52:35.927318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.964 [2024-10-01 13:52:35.927349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.964 [2024-10-01 13:52:35.927367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.964 [2024-10-01 13:52:35.927381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.964 [2024-10-01 13:52:35.927412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.964 [2024-10-01 13:52:35.933069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.964 [2024-10-01 13:52:35.933207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.964 [2024-10-01 13:52:35.933248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.964 [2024-10-01 13:52:35.933269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.964 [2024-10-01 13:52:35.933303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.964 [2024-10-01 13:52:35.933335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.964 [2024-10-01 13:52:35.933352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.964 [2024-10-01 13:52:35.933366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.964 [2024-10-01 13:52:35.933397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.964 [2024-10-01 13:52:35.938344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.964 [2024-10-01 13:52:35.938461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.964 [2024-10-01 13:52:35.938493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.964 [2024-10-01 13:52:35.938512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.964 [2024-10-01 13:52:35.939292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.964 [2024-10-01 13:52:35.939492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.964 [2024-10-01 13:52:35.939526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.964 [2024-10-01 13:52:35.939545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.964 [2024-10-01 13:52:35.939608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.964 [2024-10-01 13:52:35.943183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.964 [2024-10-01 13:52:35.943298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.964 [2024-10-01 13:52:35.943331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.964 [2024-10-01 13:52:35.943350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.964 [2024-10-01 13:52:35.943382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.964 [2024-10-01 13:52:35.943414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.964 [2024-10-01 13:52:35.943431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.964 [2024-10-01 13:52:35.943445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.964 [2024-10-01 13:52:35.943476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.964 [2024-10-01 13:52:35.948433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.964 [2024-10-01 13:52:35.948548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.964 [2024-10-01 13:52:35.948588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.964 [2024-10-01 13:52:35.948614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.964 [2024-10-01 13:52:35.948668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.964 [2024-10-01 13:52:35.948701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.964 [2024-10-01 13:52:35.948720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.964 [2024-10-01 13:52:35.948735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.964 [2024-10-01 13:52:35.948765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.964 [2024-10-01 13:52:35.953721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.964 [2024-10-01 13:52:35.953835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.964 [2024-10-01 13:52:35.953866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.964 [2024-10-01 13:52:35.953885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.964 [2024-10-01 13:52:35.954642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.964 [2024-10-01 13:52:35.954847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.964 [2024-10-01 13:52:35.954882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.964 [2024-10-01 13:52:35.954899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.964 [2024-10-01 13:52:35.954956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.964 [2024-10-01 13:52:35.958521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.964 [2024-10-01 13:52:35.958649] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.964 [2024-10-01 13:52:35.958689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.964 [2024-10-01 13:52:35.958709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.964 [2024-10-01 13:52:35.958742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.964 [2024-10-01 13:52:35.958775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.964 [2024-10-01 13:52:35.958793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.964 [2024-10-01 13:52:35.958807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.964 [2024-10-01 13:52:35.958837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.964 [2024-10-01 13:52:35.963812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.964 [2024-10-01 13:52:35.963938] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.964 [2024-10-01 13:52:35.963978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.964 [2024-10-01 13:52:35.963999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.964 [2024-10-01 13:52:35.964033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.964 [2024-10-01 13:52:35.964065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.964 [2024-10-01 13:52:35.964082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.964 [2024-10-01 13:52:35.964114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.964 [2024-10-01 13:52:35.964149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.965 [2024-10-01 13:52:35.969059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.965 [2024-10-01 13:52:35.969172] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.965 [2024-10-01 13:52:35.969206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.965 [2024-10-01 13:52:35.969225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.965 [2024-10-01 13:52:35.969969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.965 [2024-10-01 13:52:35.970166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.965 [2024-10-01 13:52:35.970201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.965 [2024-10-01 13:52:35.970219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.965 [2024-10-01 13:52:35.970260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.965 [2024-10-01 13:52:35.973902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.965 [2024-10-01 13:52:35.974023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.965 [2024-10-01 13:52:35.974054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.965 [2024-10-01 13:52:35.974073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.965 [2024-10-01 13:52:35.974105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.965 [2024-10-01 13:52:35.974137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.965 [2024-10-01 13:52:35.974154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.965 [2024-10-01 13:52:35.974169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.965 [2024-10-01 13:52:35.974199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.965 [2024-10-01 13:52:35.979154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.965 [2024-10-01 13:52:35.979264] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.965 [2024-10-01 13:52:35.979295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.965 [2024-10-01 13:52:35.979314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.965 [2024-10-01 13:52:35.979346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.965 [2024-10-01 13:52:35.979377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.965 [2024-10-01 13:52:35.979394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.965 [2024-10-01 13:52:35.979408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.965 [2024-10-01 13:52:35.979438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.965 [2024-10-01 13:52:35.984376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.965 [2024-10-01 13:52:35.984501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.965 [2024-10-01 13:52:35.984560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.965 [2024-10-01 13:52:35.984583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.965 [2024-10-01 13:52:35.985334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.965 [2024-10-01 13:52:35.985547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.965 [2024-10-01 13:52:35.985582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.965 [2024-10-01 13:52:35.985600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.965 [2024-10-01 13:52:35.985663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.965 [2024-10-01 13:52:35.989242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.965 [2024-10-01 13:52:35.989358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.965 [2024-10-01 13:52:35.989390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.965 [2024-10-01 13:52:35.989409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.965 [2024-10-01 13:52:35.989442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.965 [2024-10-01 13:52:35.989475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.965 [2024-10-01 13:52:35.989493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.965 [2024-10-01 13:52:35.989507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.965 [2024-10-01 13:52:35.989538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.965 [2024-10-01 13:52:35.994473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.965 [2024-10-01 13:52:35.994597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.965 [2024-10-01 13:52:35.994637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.965 [2024-10-01 13:52:35.994657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.965 [2024-10-01 13:52:35.994691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.965 [2024-10-01 13:52:35.994722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.965 [2024-10-01 13:52:35.994739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.965 [2024-10-01 13:52:35.994754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.965 [2024-10-01 13:52:35.994784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.965 [2024-10-01 13:52:35.999701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.965 [2024-10-01 13:52:35.999814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.965 [2024-10-01 13:52:35.999852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.965 [2024-10-01 13:52:35.999872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.965 [2024-10-01 13:52:36.000617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.965 [2024-10-01 13:52:36.000834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.965 [2024-10-01 13:52:36.000868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.965 [2024-10-01 13:52:36.000886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.965 [2024-10-01 13:52:36.000941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.965 [2024-10-01 13:52:36.004559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.965 [2024-10-01 13:52:36.004673] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.965 [2024-10-01 13:52:36.004712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.965 [2024-10-01 13:52:36.004732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.965 [2024-10-01 13:52:36.004765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.965 [2024-10-01 13:52:36.004808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.965 [2024-10-01 13:52:36.004825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.965 [2024-10-01 13:52:36.004839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.965 [2024-10-01 13:52:36.004870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.965 [2024-10-01 13:52:36.009793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.965 [2024-10-01 13:52:36.009906] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.965 [2024-10-01 13:52:36.009955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.965 [2024-10-01 13:52:36.009974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.965 [2024-10-01 13:52:36.010008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.965 [2024-10-01 13:52:36.010039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.965 [2024-10-01 13:52:36.010057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.965 [2024-10-01 13:52:36.010071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.965 [2024-10-01 13:52:36.010101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.965 [2024-10-01 13:52:36.015105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.965 [2024-10-01 13:52:36.015947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.965 [2024-10-01 13:52:36.015989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.965 [2024-10-01 13:52:36.016009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.966 [2024-10-01 13:52:36.016186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.966 [2024-10-01 13:52:36.016242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.966 [2024-10-01 13:52:36.016264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.966 [2024-10-01 13:52:36.016279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.966 [2024-10-01 13:52:36.016310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.966 [2024-10-01 13:52:36.019886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.966 [2024-10-01 13:52:36.020013] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.966 [2024-10-01 13:52:36.020062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.966 [2024-10-01 13:52:36.020082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.966 [2024-10-01 13:52:36.020115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.966 [2024-10-01 13:52:36.020146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.966 [2024-10-01 13:52:36.020164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.966 [2024-10-01 13:52:36.020177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.966 [2024-10-01 13:52:36.020207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.966 [2024-10-01 13:52:36.025191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.966 [2024-10-01 13:52:36.025303] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.966 [2024-10-01 13:52:36.025341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.966 [2024-10-01 13:52:36.025361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.966 [2024-10-01 13:52:36.025394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.966 [2024-10-01 13:52:36.025426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.966 [2024-10-01 13:52:36.025444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.966 [2024-10-01 13:52:36.025459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.966 [2024-10-01 13:52:36.025490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.966 [2024-10-01 13:52:36.030278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.966 [2024-10-01 13:52:36.030391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.966 [2024-10-01 13:52:36.030424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.966 [2024-10-01 13:52:36.030442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.966 [2024-10-01 13:52:36.031201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.966 [2024-10-01 13:52:36.031401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.966 [2024-10-01 13:52:36.031435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.966 [2024-10-01 13:52:36.031454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.966 [2024-10-01 13:52:36.031514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.966 [2024-10-01 13:52:36.035280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.966 [2024-10-01 13:52:36.035390] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.966 [2024-10-01 13:52:36.035429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.966 [2024-10-01 13:52:36.035471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.966 [2024-10-01 13:52:36.035507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.966 [2024-10-01 13:52:36.035556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.966 [2024-10-01 13:52:36.035578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.966 [2024-10-01 13:52:36.035593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.966 [2024-10-01 13:52:36.035624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.966 [2024-10-01 13:52:36.040367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.966 [2024-10-01 13:52:36.040478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.966 [2024-10-01 13:52:36.040511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.966 [2024-10-01 13:52:36.040530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.966 [2024-10-01 13:52:36.040563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.966 [2024-10-01 13:52:36.040594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.966 [2024-10-01 13:52:36.040611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.966 [2024-10-01 13:52:36.040626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.966 [2024-10-01 13:52:36.040655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.966 [2024-10-01 13:52:36.045530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.966 [2024-10-01 13:52:36.045644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.966 [2024-10-01 13:52:36.045686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.966 [2024-10-01 13:52:36.045707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.966 [2024-10-01 13:52:36.046453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.966 [2024-10-01 13:52:36.046670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.966 [2024-10-01 13:52:36.046706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.966 [2024-10-01 13:52:36.046724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.966 [2024-10-01 13:52:36.046766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.966 [2024-10-01 13:52:36.050453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.966 [2024-10-01 13:52:36.050574] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.966 [2024-10-01 13:52:36.050614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.966 [2024-10-01 13:52:36.050635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.966 [2024-10-01 13:52:36.050668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.966 [2024-10-01 13:52:36.050700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.966 [2024-10-01 13:52:36.050735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.966 [2024-10-01 13:52:36.050757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.966 [2024-10-01 13:52:36.050789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.966 [2024-10-01 13:52:36.055622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.966 [2024-10-01 13:52:36.055744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.966 [2024-10-01 13:52:36.055785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.966 [2024-10-01 13:52:36.055806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.966 [2024-10-01 13:52:36.055846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.966 [2024-10-01 13:52:36.055878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.966 [2024-10-01 13:52:36.055895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.966 [2024-10-01 13:52:36.055923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.966 [2024-10-01 13:52:36.055961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.966 [2024-10-01 13:52:36.060765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.966 [2024-10-01 13:52:36.060880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.966 [2024-10-01 13:52:36.060931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.966 [2024-10-01 13:52:36.060954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.966 [2024-10-01 13:52:36.061683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.966 [2024-10-01 13:52:36.061900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.966 [2024-10-01 13:52:36.061949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.966 [2024-10-01 13:52:36.061967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.966 [2024-10-01 13:52:36.062009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.966 [2024-10-01 13:52:36.065737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.966 [2024-10-01 13:52:36.065847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.967 [2024-10-01 13:52:36.065880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.967 [2024-10-01 13:52:36.065898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.967 [2024-10-01 13:52:36.065948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.967 [2024-10-01 13:52:36.065983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.967 [2024-10-01 13:52:36.066001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.967 [2024-10-01 13:52:36.066016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.967 [2024-10-01 13:52:36.066047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.967 [2024-10-01 13:52:36.070853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.967 [2024-10-01 13:52:36.071000] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.967 [2024-10-01 13:52:36.071032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.967 [2024-10-01 13:52:36.071051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.967 [2024-10-01 13:52:36.071083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.967 [2024-10-01 13:52:36.071114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.967 [2024-10-01 13:52:36.071132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.967 [2024-10-01 13:52:36.071146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.967 [2024-10-01 13:52:36.071177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.967 [2024-10-01 13:52:36.076107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.967 [2024-10-01 13:52:36.076222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.967 [2024-10-01 13:52:36.076255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.967 [2024-10-01 13:52:36.076274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.967 [2024-10-01 13:52:36.077018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.967 [2024-10-01 13:52:36.077216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.967 [2024-10-01 13:52:36.077251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.967 [2024-10-01 13:52:36.077269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.967 [2024-10-01 13:52:36.077310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.967 [2024-10-01 13:52:36.080972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.967 [2024-10-01 13:52:36.081081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.967 [2024-10-01 13:52:36.081120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.967 [2024-10-01 13:52:36.081140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.967 [2024-10-01 13:52:36.081173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.967 [2024-10-01 13:52:36.081205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.967 [2024-10-01 13:52:36.081223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.967 [2024-10-01 13:52:36.081237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.967 [2024-10-01 13:52:36.081267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.967 [2024-10-01 13:52:36.086195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.967 [2024-10-01 13:52:36.086306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.967 [2024-10-01 13:52:36.086338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.967 [2024-10-01 13:52:36.086356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.967 [2024-10-01 13:52:36.086407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.967 [2024-10-01 13:52:36.086453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.967 [2024-10-01 13:52:36.086470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.967 [2024-10-01 13:52:36.086485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.967 [2024-10-01 13:52:36.086515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.967 [2024-10-01 13:52:36.091408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.967 [2024-10-01 13:52:36.091520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.967 [2024-10-01 13:52:36.091559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.967 [2024-10-01 13:52:36.091580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.967 [2024-10-01 13:52:36.092325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.967 [2024-10-01 13:52:36.092522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.967 [2024-10-01 13:52:36.092556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.967 [2024-10-01 13:52:36.092574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.967 [2024-10-01 13:52:36.092615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.967 [2024-10-01 13:52:36.096279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.967 [2024-10-01 13:52:36.096391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.967 [2024-10-01 13:52:36.096430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.967 [2024-10-01 13:52:36.096451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.967 [2024-10-01 13:52:36.096484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.967 [2024-10-01 13:52:36.096516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.967 [2024-10-01 13:52:36.096534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.967 [2024-10-01 13:52:36.096548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.967 [2024-10-01 13:52:36.096579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.967 [2024-10-01 13:52:36.101501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.967 [2024-10-01 13:52:36.101611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.967 [2024-10-01 13:52:36.101649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.967 [2024-10-01 13:52:36.101669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.967 [2024-10-01 13:52:36.101701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.967 [2024-10-01 13:52:36.101732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.967 [2024-10-01 13:52:36.101750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.967 [2024-10-01 13:52:36.101783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.967 [2024-10-01 13:52:36.101817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.967 [2024-10-01 13:52:36.106499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.967 [2024-10-01 13:52:36.106623] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.967 [2024-10-01 13:52:36.106708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.967 [2024-10-01 13:52:36.106733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.967 [2024-10-01 13:52:36.107480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.967 [2024-10-01 13:52:36.107678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.967 [2024-10-01 13:52:36.107713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.967 [2024-10-01 13:52:36.107731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.967 [2024-10-01 13:52:36.107792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.967 [2024-10-01 13:52:36.111587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.967 [2024-10-01 13:52:36.111696] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.967 [2024-10-01 13:52:36.111728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.967 [2024-10-01 13:52:36.111746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.967 [2024-10-01 13:52:36.111778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.967 [2024-10-01 13:52:36.111826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.968 [2024-10-01 13:52:36.111848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.968 [2024-10-01 13:52:36.111863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.968 [2024-10-01 13:52:36.111894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.968 [2024-10-01 13:52:36.116599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.968 [2024-10-01 13:52:36.116710] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.968 [2024-10-01 13:52:36.116742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.968 [2024-10-01 13:52:36.116761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.968 [2024-10-01 13:52:36.116798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.968 [2024-10-01 13:52:36.116829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.968 [2024-10-01 13:52:36.116847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.968 [2024-10-01 13:52:36.116861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.968 [2024-10-01 13:52:36.116891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.968 [2024-10-01 13:52:36.121676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.968 [2024-10-01 13:52:36.121788] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.968 [2024-10-01 13:52:36.121848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.968 [2024-10-01 13:52:36.121869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.968 [2024-10-01 13:52:36.122629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.968 [2024-10-01 13:52:36.122828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.968 [2024-10-01 13:52:36.122863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.968 [2024-10-01 13:52:36.122881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.968 [2024-10-01 13:52:36.122936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.968 [2024-10-01 13:52:36.126685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.968 [2024-10-01 13:52:36.126810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.968 [2024-10-01 13:52:36.126849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.968 [2024-10-01 13:52:36.126871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.968 [2024-10-01 13:52:36.126905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.968 [2024-10-01 13:52:36.126954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.968 [2024-10-01 13:52:36.126973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.968 [2024-10-01 13:52:36.126987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.968 [2024-10-01 13:52:36.127019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.968 [2024-10-01 13:52:36.131769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.968 [2024-10-01 13:52:36.131890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.968 [2024-10-01 13:52:36.131945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.968 [2024-10-01 13:52:36.131967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.968 [2024-10-01 13:52:36.132002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.968 [2024-10-01 13:52:36.132034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.968 [2024-10-01 13:52:36.132052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.968 [2024-10-01 13:52:36.132066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.968 [2024-10-01 13:52:36.132097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.968 [2024-10-01 13:52:36.137180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.968 [2024-10-01 13:52:36.137300] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.968 [2024-10-01 13:52:36.137341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.968 [2024-10-01 13:52:36.137362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.968 [2024-10-01 13:52:36.138140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.968 [2024-10-01 13:52:36.138392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.968 [2024-10-01 13:52:36.138428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.968 [2024-10-01 13:52:36.138447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.968 [2024-10-01 13:52:36.138489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.968 [2024-10-01 13:52:36.141862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.968 [2024-10-01 13:52:36.141987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.968 [2024-10-01 13:52:36.142026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.968 [2024-10-01 13:52:36.142046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.968 [2024-10-01 13:52:36.142080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.968 [2024-10-01 13:52:36.142111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.968 [2024-10-01 13:52:36.142133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.968 [2024-10-01 13:52:36.142148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.968 [2024-10-01 13:52:36.142179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.968 [2024-10-01 13:52:36.147276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.968 [2024-10-01 13:52:36.147386] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.968 [2024-10-01 13:52:36.147419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.968 [2024-10-01 13:52:36.147437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.968 [2024-10-01 13:52:36.147469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.968 [2024-10-01 13:52:36.147500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.968 [2024-10-01 13:52:36.147517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.968 [2024-10-01 13:52:36.147533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.968 [2024-10-01 13:52:36.147563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.968 [2024-10-01 13:52:36.152605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.968 [2024-10-01 13:52:36.152718] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.968 [2024-10-01 13:52:36.152751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.968 [2024-10-01 13:52:36.152769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.968 [2024-10-01 13:52:36.153514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.968 [2024-10-01 13:52:36.153715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.968 [2024-10-01 13:52:36.153749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.968 [2024-10-01 13:52:36.153767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.968 [2024-10-01 13:52:36.153808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.968 [2024-10-01 13:52:36.157362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.968 [2024-10-01 13:52:36.157478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.968 [2024-10-01 13:52:36.157516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.968 [2024-10-01 13:52:36.157536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.968 [2024-10-01 13:52:36.157569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.968 [2024-10-01 13:52:36.157600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.968 [2024-10-01 13:52:36.157618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.968 [2024-10-01 13:52:36.157632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.968 [2024-10-01 13:52:36.157663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.968 [2024-10-01 13:52:36.162695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.968 [2024-10-01 13:52:36.162815] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.968 [2024-10-01 13:52:36.162847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.968 [2024-10-01 13:52:36.162865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.969 [2024-10-01 13:52:36.162897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.969 [2024-10-01 13:52:36.162955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.969 [2024-10-01 13:52:36.162975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.969 [2024-10-01 13:52:36.162989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.969 [2024-10-01 13:52:36.163021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.969 [2024-10-01 13:52:36.167903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.969 [2024-10-01 13:52:36.168029] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.969 [2024-10-01 13:52:36.168069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.969 [2024-10-01 13:52:36.168089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.969 [2024-10-01 13:52:36.168827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.969 [2024-10-01 13:52:36.169038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.969 [2024-10-01 13:52:36.169072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.969 [2024-10-01 13:52:36.169101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.969 [2024-10-01 13:52:36.169143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.969 [2024-10-01 13:52:36.172789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.969 [2024-10-01 13:52:36.172900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.969 [2024-10-01 13:52:36.172958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.969 [2024-10-01 13:52:36.172997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.969 [2024-10-01 13:52:36.173034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.969 [2024-10-01 13:52:36.173066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.969 [2024-10-01 13:52:36.173083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.969 [2024-10-01 13:52:36.173098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.969 [2024-10-01 13:52:36.173128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.969 [2024-10-01 13:52:36.178005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.969 [2024-10-01 13:52:36.178123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.969 [2024-10-01 13:52:36.178155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.969 [2024-10-01 13:52:36.178173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.969 [2024-10-01 13:52:36.178206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.969 [2024-10-01 13:52:36.178237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.969 [2024-10-01 13:52:36.178255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.969 [2024-10-01 13:52:36.178269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.969 [2024-10-01 13:52:36.178300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.969 [2024-10-01 13:52:36.183248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.969 [2024-10-01 13:52:36.183362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.969 [2024-10-01 13:52:36.183404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.969 [2024-10-01 13:52:36.183425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.969 [2024-10-01 13:52:36.184169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.969 [2024-10-01 13:52:36.184367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.969 [2024-10-01 13:52:36.184403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.969 [2024-10-01 13:52:36.184422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.969 [2024-10-01 13:52:36.184483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.969 [2024-10-01 13:52:36.188099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.969 [2024-10-01 13:52:36.188210] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.969 [2024-10-01 13:52:36.188249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.969 [2024-10-01 13:52:36.188270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.970 [2024-10-01 13:52:36.188302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.970 [2024-10-01 13:52:36.188334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.970 [2024-10-01 13:52:36.188374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.970 [2024-10-01 13:52:36.188391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.970 [2024-10-01 13:52:36.188422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.970 [2024-10-01 13:52:36.193338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.970 [2024-10-01 13:52:36.193451] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.970 [2024-10-01 13:52:36.193490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.970 [2024-10-01 13:52:36.193511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.970 [2024-10-01 13:52:36.193544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.970 [2024-10-01 13:52:36.193576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.970 [2024-10-01 13:52:36.193593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.970 [2024-10-01 13:52:36.193608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.970 [2024-10-01 13:52:36.193638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.970 [2024-10-01 13:52:36.198501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.970 [2024-10-01 13:52:36.198622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.970 [2024-10-01 13:52:36.198656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.970 [2024-10-01 13:52:36.198674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.970 [2024-10-01 13:52:36.199426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.970 [2024-10-01 13:52:36.199624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.970 [2024-10-01 13:52:36.199659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.970 [2024-10-01 13:52:36.199677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.970 [2024-10-01 13:52:36.199719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.970 [2024-10-01 13:52:36.203429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.970 [2024-10-01 13:52:36.203538] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.970 [2024-10-01 13:52:36.203587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.970 [2024-10-01 13:52:36.203607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.970 [2024-10-01 13:52:36.203641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.970 [2024-10-01 13:52:36.203672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.970 [2024-10-01 13:52:36.203689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.970 [2024-10-01 13:52:36.203704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.970 [2024-10-01 13:52:36.203734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.970 [2024-10-01 13:52:36.208596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.970 [2024-10-01 13:52:36.208727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.970 [2024-10-01 13:52:36.208760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.970 [2024-10-01 13:52:36.208778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.970 [2024-10-01 13:52:36.208811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.970 [2024-10-01 13:52:36.208842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.970 [2024-10-01 13:52:36.208860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.970 [2024-10-01 13:52:36.208874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.970 [2024-10-01 13:52:36.208906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.970 [2024-10-01 13:52:36.213783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.970 [2024-10-01 13:52:36.213895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.970 [2024-10-01 13:52:36.213946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.970 [2024-10-01 13:52:36.213967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.970 [2024-10-01 13:52:36.214706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.970 [2024-10-01 13:52:36.214926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.970 [2024-10-01 13:52:36.214959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.970 [2024-10-01 13:52:36.214976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.970 [2024-10-01 13:52:36.215018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.970 [2024-10-01 13:52:36.218698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.970 [2024-10-01 13:52:36.218808] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.970 [2024-10-01 13:52:36.218839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.970 [2024-10-01 13:52:36.218857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.970 [2024-10-01 13:52:36.218890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.970 [2024-10-01 13:52:36.218937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.970 [2024-10-01 13:52:36.218957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.970 [2024-10-01 13:52:36.218972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.970 [2024-10-01 13:52:36.219012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.970 [2024-10-01 13:52:36.223869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.970 [2024-10-01 13:52:36.223997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.970 [2024-10-01 13:52:36.224037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.970 [2024-10-01 13:52:36.224057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.970 [2024-10-01 13:52:36.224110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.970 [2024-10-01 13:52:36.224143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.970 [2024-10-01 13:52:36.224160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.970 [2024-10-01 13:52:36.224175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.970 [2024-10-01 13:52:36.224206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.970 [2024-10-01 13:52:36.229138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.970 [2024-10-01 13:52:36.229252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.970 [2024-10-01 13:52:36.229292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.970 [2024-10-01 13:52:36.229312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.970 [2024-10-01 13:52:36.230065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.970 [2024-10-01 13:52:36.230273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.970 [2024-10-01 13:52:36.230308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.970 [2024-10-01 13:52:36.230326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.970 [2024-10-01 13:52:36.230386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.970 [2024-10-01 13:52:36.233972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.970 [2024-10-01 13:52:36.234082] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.970 [2024-10-01 13:52:36.234114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.970 [2024-10-01 13:52:36.234132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.970 [2024-10-01 13:52:36.234165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.970 [2024-10-01 13:52:36.234196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.970 [2024-10-01 13:52:36.234213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.970 [2024-10-01 13:52:36.234228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.970 [2024-10-01 13:52:36.234258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.970 [2024-10-01 13:52:36.239231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.970 [2024-10-01 13:52:36.239344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.970 [2024-10-01 13:52:36.239383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.970 [2024-10-01 13:52:36.239402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.970 [2024-10-01 13:52:36.239435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.970 [2024-10-01 13:52:36.239466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.970 [2024-10-01 13:52:36.239483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.970 [2024-10-01 13:52:36.239518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.970 [2024-10-01 13:52:36.239551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.970 [2024-10-01 13:52:36.244536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.970 [2024-10-01 13:52:36.244664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.970 [2024-10-01 13:52:36.244702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.970 [2024-10-01 13:52:36.244722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.970 [2024-10-01 13:52:36.245478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.971 [2024-10-01 13:52:36.245682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.971 [2024-10-01 13:52:36.245716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.971 [2024-10-01 13:52:36.245734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.971 [2024-10-01 13:52:36.245775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.971 [2024-10-01 13:52:36.249321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.971 [2024-10-01 13:52:36.249431] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.971 [2024-10-01 13:52:36.249463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.971 [2024-10-01 13:52:36.249481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.971 [2024-10-01 13:52:36.249513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.971 [2024-10-01 13:52:36.249544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.971 [2024-10-01 13:52:36.249562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.971 [2024-10-01 13:52:36.249576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.971 [2024-10-01 13:52:36.249606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.971 [2024-10-01 13:52:36.254630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.971 [2024-10-01 13:52:36.254741] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.971 [2024-10-01 13:52:36.254780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.971 [2024-10-01 13:52:36.254801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.971 [2024-10-01 13:52:36.254834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.971 [2024-10-01 13:52:36.254866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.971 [2024-10-01 13:52:36.254883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.971 [2024-10-01 13:52:36.254898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.971 [2024-10-01 13:52:36.254980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.971 [2024-10-01 13:52:36.259899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.971 [2024-10-01 13:52:36.260027] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.971 [2024-10-01 13:52:36.260078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.971 [2024-10-01 13:52:36.260098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.971 [2024-10-01 13:52:36.260831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.971 [2024-10-01 13:52:36.261045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.971 [2024-10-01 13:52:36.261080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.971 [2024-10-01 13:52:36.261098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.971 [2024-10-01 13:52:36.261139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.971 [2024-10-01 13:52:36.264720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.971 [2024-10-01 13:52:36.264837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.971 [2024-10-01 13:52:36.264870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.971 [2024-10-01 13:52:36.264892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.971 [2024-10-01 13:52:36.264940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.971 [2024-10-01 13:52:36.264975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.971 [2024-10-01 13:52:36.264992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.971 [2024-10-01 13:52:36.265006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.971 [2024-10-01 13:52:36.265037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.971 [2024-10-01 13:52:36.270006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.971 [2024-10-01 13:52:36.270118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.971 [2024-10-01 13:52:36.270150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.971 [2024-10-01 13:52:36.270168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.971 [2024-10-01 13:52:36.270200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.971 [2024-10-01 13:52:36.270231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.971 [2024-10-01 13:52:36.270249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.971 [2024-10-01 13:52:36.270264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.971 [2024-10-01 13:52:36.270295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.971 [2024-10-01 13:52:36.275099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.971 [2024-10-01 13:52:36.275213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.971 [2024-10-01 13:52:36.275246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.971 [2024-10-01 13:52:36.275265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.971 [2024-10-01 13:52:36.276016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.971 [2024-10-01 13:52:36.276232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.971 [2024-10-01 13:52:36.276266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.971 [2024-10-01 13:52:36.276283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.971 [2024-10-01 13:52:36.276343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.971 [2024-10-01 13:52:36.280098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.971 [2024-10-01 13:52:36.280213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.971 [2024-10-01 13:52:36.280245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.971 [2024-10-01 13:52:36.280264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.971 [2024-10-01 13:52:36.280296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.971 [2024-10-01 13:52:36.280345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.971 [2024-10-01 13:52:36.280367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.971 [2024-10-01 13:52:36.280382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.971 [2024-10-01 13:52:36.280413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.971 [2024-10-01 13:52:36.285190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.971 [2024-10-01 13:52:36.285301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.971 [2024-10-01 13:52:36.285333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.971 [2024-10-01 13:52:36.285352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.971 [2024-10-01 13:52:36.285385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.971 [2024-10-01 13:52:36.285415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.971 [2024-10-01 13:52:36.285433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.971 [2024-10-01 13:52:36.285448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.971 [2024-10-01 13:52:36.285478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.971 [2024-10-01 13:52:36.290345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.971 [2024-10-01 13:52:36.290457] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.971 [2024-10-01 13:52:36.290497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.971 [2024-10-01 13:52:36.290516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.971 [2024-10-01 13:52:36.291272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.971 [2024-10-01 13:52:36.291471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.971 [2024-10-01 13:52:36.291506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.971 [2024-10-01 13:52:36.291524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.971 [2024-10-01 13:52:36.291588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.971 [2024-10-01 13:52:36.295278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.971 [2024-10-01 13:52:36.295389] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.971 [2024-10-01 13:52:36.295421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.971 [2024-10-01 13:52:36.295440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.971 [2024-10-01 13:52:36.295473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.971 [2024-10-01 13:52:36.295504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.971 [2024-10-01 13:52:36.295522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.971 [2024-10-01 13:52:36.295536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.971 [2024-10-01 13:52:36.295566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.971 [2024-10-01 13:52:36.300433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.971 [2024-10-01 13:52:36.300545] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.971 [2024-10-01 13:52:36.300586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.971 [2024-10-01 13:52:36.300606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.971 [2024-10-01 13:52:36.300639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.971 [2024-10-01 13:52:36.300671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.971 [2024-10-01 13:52:36.300688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.972 [2024-10-01 13:52:36.300703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.972 [2024-10-01 13:52:36.300733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.972 [2024-10-01 13:52:36.305587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.972 [2024-10-01 13:52:36.305701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.972 [2024-10-01 13:52:36.305746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.972 [2024-10-01 13:52:36.305767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.972 [2024-10-01 13:52:36.306514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.972 [2024-10-01 13:52:36.306746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.972 [2024-10-01 13:52:36.306782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.972 [2024-10-01 13:52:36.306800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.972 [2024-10-01 13:52:36.306840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.972 [2024-10-01 13:52:36.310523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.972 [2024-10-01 13:52:36.310667] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.972 [2024-10-01 13:52:36.310699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.972 [2024-10-01 13:52:36.310752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.972 [2024-10-01 13:52:36.310788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.972 [2024-10-01 13:52:36.310821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.972 [2024-10-01 13:52:36.310839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.972 [2024-10-01 13:52:36.310854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.972 [2024-10-01 13:52:36.310884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.972 [2024-10-01 13:52:36.315677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.972 [2024-10-01 13:52:36.315794] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.972 [2024-10-01 13:52:36.315826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.972 [2024-10-01 13:52:36.315844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.972 [2024-10-01 13:52:36.315876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.972 [2024-10-01 13:52:36.315907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.972 [2024-10-01 13:52:36.315945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.972 [2024-10-01 13:52:36.315961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.972 [2024-10-01 13:52:36.315992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.972 [2024-10-01 13:52:36.320957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.972 [2024-10-01 13:52:36.321075] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.972 [2024-10-01 13:52:36.321116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.972 [2024-10-01 13:52:36.321137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.972 [2024-10-01 13:52:36.321887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.972 [2024-10-01 13:52:36.322106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.972 [2024-10-01 13:52:36.322141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.972 [2024-10-01 13:52:36.322160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.972 [2024-10-01 13:52:36.322201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.972 [2024-10-01 13:52:36.325774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.972 [2024-10-01 13:52:36.325886] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.972 [2024-10-01 13:52:36.325932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.972 [2024-10-01 13:52:36.325953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.972 [2024-10-01 13:52:36.325988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.972 [2024-10-01 13:52:36.326020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.972 [2024-10-01 13:52:36.326070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.972 [2024-10-01 13:52:36.326087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.972 [2024-10-01 13:52:36.326119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.972 [2024-10-01 13:52:36.331053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.972 [2024-10-01 13:52:36.331173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.972 [2024-10-01 13:52:36.331213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.972 [2024-10-01 13:52:36.331234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.972 [2024-10-01 13:52:36.331268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.972 [2024-10-01 13:52:36.331300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.972 [2024-10-01 13:52:36.331318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.972 [2024-10-01 13:52:36.331334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.972 [2024-10-01 13:52:36.331364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.972 [2024-10-01 13:52:36.336372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.972 [2024-10-01 13:52:36.337225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.972 [2024-10-01 13:52:36.337268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.972 [2024-10-01 13:52:36.337289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.972 [2024-10-01 13:52:36.337468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.972 [2024-10-01 13:52:36.337525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.972 [2024-10-01 13:52:36.337546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.972 [2024-10-01 13:52:36.337561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.972 [2024-10-01 13:52:36.337592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.972 [2024-10-01 13:52:36.341144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.972 [2024-10-01 13:52:36.341257] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.972 [2024-10-01 13:52:36.341290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.972 [2024-10-01 13:52:36.341309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.972 [2024-10-01 13:52:36.341343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.972 [2024-10-01 13:52:36.341374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.972 [2024-10-01 13:52:36.341392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.972 [2024-10-01 13:52:36.341407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.972 [2024-10-01 13:52:36.341438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.972 [2024-10-01 13:52:36.346466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.972 [2024-10-01 13:52:36.346626] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.972 [2024-10-01 13:52:36.346658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.972 [2024-10-01 13:52:36.346676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.972 [2024-10-01 13:52:36.346709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.972 [2024-10-01 13:52:36.346742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.972 [2024-10-01 13:52:36.346759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.972 [2024-10-01 13:52:36.346773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.972 [2024-10-01 13:52:36.346805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.972 [2024-10-01 13:52:36.351729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.972 [2024-10-01 13:52:36.352602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.972 [2024-10-01 13:52:36.352647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.972 [2024-10-01 13:52:36.352669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.972 [2024-10-01 13:52:36.352853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.972 [2024-10-01 13:52:36.352944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.972 [2024-10-01 13:52:36.352971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.972 [2024-10-01 13:52:36.352988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.972 [2024-10-01 13:52:36.353021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.972 [2024-10-01 13:52:36.356597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.972 [2024-10-01 13:52:36.356715] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.972 [2024-10-01 13:52:36.356754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.972 [2024-10-01 13:52:36.356774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.972 [2024-10-01 13:52:36.356808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.973 [2024-10-01 13:52:36.356840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.973 [2024-10-01 13:52:36.356858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.973 [2024-10-01 13:52:36.356873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.973 [2024-10-01 13:52:36.356904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.973 [2024-10-01 13:52:36.361833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.973 [2024-10-01 13:52:36.361978] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.973 [2024-10-01 13:52:36.362011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.973 [2024-10-01 13:52:36.362031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.973 [2024-10-01 13:52:36.362097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.973 [2024-10-01 13:52:36.362130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.973 [2024-10-01 13:52:36.362149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.973 [2024-10-01 13:52:36.362164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.973 [2024-10-01 13:52:36.362196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.973 [2024-10-01 13:52:36.367103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.973 [2024-10-01 13:52:36.367950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.973 [2024-10-01 13:52:36.367992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.973 [2024-10-01 13:52:36.368013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.973 [2024-10-01 13:52:36.368197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.973 [2024-10-01 13:52:36.368254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.973 [2024-10-01 13:52:36.368275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.973 [2024-10-01 13:52:36.368290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.973 [2024-10-01 13:52:36.368322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.973 [2024-10-01 13:52:36.371948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.973 [2024-10-01 13:52:36.372063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.973 [2024-10-01 13:52:36.372096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.973 [2024-10-01 13:52:36.372115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.973 [2024-10-01 13:52:36.372148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.973 [2024-10-01 13:52:36.372180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.973 [2024-10-01 13:52:36.372197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.973 [2024-10-01 13:52:36.372212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.973 [2024-10-01 13:52:36.372243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.973 [2024-10-01 13:52:36.377192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.973 [2024-10-01 13:52:36.377307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.973 [2024-10-01 13:52:36.377347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.973 [2024-10-01 13:52:36.377367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.973 [2024-10-01 13:52:36.377401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.973 [2024-10-01 13:52:36.377433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.973 [2024-10-01 13:52:36.377450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.973 [2024-10-01 13:52:36.377490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.973 [2024-10-01 13:52:36.377523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.973 [2024-10-01 13:52:36.382410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.973 [2024-10-01 13:52:36.383294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.973 [2024-10-01 13:52:36.383339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.973 [2024-10-01 13:52:36.383360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.973 [2024-10-01 13:52:36.383558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.973 [2024-10-01 13:52:36.383616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.973 [2024-10-01 13:52:36.383638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.973 [2024-10-01 13:52:36.383654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.973 [2024-10-01 13:52:36.383687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.973 [2024-10-01 13:52:36.387296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.973 [2024-10-01 13:52:36.387415] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.973 [2024-10-01 13:52:36.387446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.973 [2024-10-01 13:52:36.387465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.973 [2024-10-01 13:52:36.387497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.973 [2024-10-01 13:52:36.387529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.973 [2024-10-01 13:52:36.387546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.973 [2024-10-01 13:52:36.387561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.973 [2024-10-01 13:52:36.387592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.973 [2024-10-01 13:52:36.392506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.973 [2024-10-01 13:52:36.392622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.973 [2024-10-01 13:52:36.392662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.973 [2024-10-01 13:52:36.392682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.973 [2024-10-01 13:52:36.392716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.973 [2024-10-01 13:52:36.392748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.973 [2024-10-01 13:52:36.392766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.973 [2024-10-01 13:52:36.392781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.973 [2024-10-01 13:52:36.392812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.973 [2024-10-01 13:52:36.397644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.973 [2024-10-01 13:52:36.397760] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.973 [2024-10-01 13:52:36.397829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.973 [2024-10-01 13:52:36.397851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.973 [2024-10-01 13:52:36.398622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.973 [2024-10-01 13:52:36.398831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.973 [2024-10-01 13:52:36.398866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.973 [2024-10-01 13:52:36.398884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.973 [2024-10-01 13:52:36.398939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.973 [2024-10-01 13:52:36.402594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.973 [2024-10-01 13:52:36.402707] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.973 [2024-10-01 13:52:36.402739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.973 [2024-10-01 13:52:36.402757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.973 [2024-10-01 13:52:36.402789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.973 [2024-10-01 13:52:36.402820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.973 [2024-10-01 13:52:36.402838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.973 [2024-10-01 13:52:36.402853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.973 [2024-10-01 13:52:36.402883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.973 [2024-10-01 13:52:36.407734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.973 [2024-10-01 13:52:36.407848] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.973 [2024-10-01 13:52:36.407886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.973 [2024-10-01 13:52:36.407906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.973 [2024-10-01 13:52:36.407956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.973 [2024-10-01 13:52:36.407988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.973 [2024-10-01 13:52:36.408007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.973 [2024-10-01 13:52:36.408022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.973 [2024-10-01 13:52:36.408053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.974 [2024-10-01 13:52:36.412967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.974 [2024-10-01 13:52:36.413803] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.974 [2024-10-01 13:52:36.413847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.974 [2024-10-01 13:52:36.413869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.974 [2024-10-01 13:52:36.414074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.974 [2024-10-01 13:52:36.414166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.974 [2024-10-01 13:52:36.414190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.974 [2024-10-01 13:52:36.414205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.974 [2024-10-01 13:52:36.414238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.974 [2024-10-01 13:52:36.417822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.974 [2024-10-01 13:52:36.417958] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.974 [2024-10-01 13:52:36.417997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.974 [2024-10-01 13:52:36.418017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.974 [2024-10-01 13:52:36.418053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.974 [2024-10-01 13:52:36.418085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.974 [2024-10-01 13:52:36.418102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.974 [2024-10-01 13:52:36.418117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.974 [2024-10-01 13:52:36.418148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.974 [2024-10-01 13:52:36.423066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.974 [2024-10-01 13:52:36.423199] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.974 [2024-10-01 13:52:36.423239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.974 [2024-10-01 13:52:36.423260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.974 [2024-10-01 13:52:36.423294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.974 [2024-10-01 13:52:36.423325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.974 [2024-10-01 13:52:36.423344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.974 [2024-10-01 13:52:36.423360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.974 [2024-10-01 13:52:36.423391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.974 [2024-10-01 13:52:36.428390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.974 [2024-10-01 13:52:36.429272] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.974 [2024-10-01 13:52:36.429318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.974 [2024-10-01 13:52:36.429340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.974 [2024-10-01 13:52:36.429528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.974 [2024-10-01 13:52:36.429586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.974 [2024-10-01 13:52:36.429609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.974 [2024-10-01 13:52:36.429625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.974 [2024-10-01 13:52:36.429693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.974 [2024-10-01 13:52:36.433163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.974 [2024-10-01 13:52:36.433280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.974 [2024-10-01 13:52:36.433319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.974 [2024-10-01 13:52:36.433339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.974 [2024-10-01 13:52:36.433373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.974 [2024-10-01 13:52:36.433404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.974 [2024-10-01 13:52:36.433422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.974 [2024-10-01 13:52:36.433437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.974 [2024-10-01 13:52:36.433468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.974 [2024-10-01 13:52:36.438503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.974 [2024-10-01 13:52:36.438644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.974 [2024-10-01 13:52:36.438676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.974 [2024-10-01 13:52:36.438695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.974 [2024-10-01 13:52:36.438729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.974 [2024-10-01 13:52:36.438761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.974 [2024-10-01 13:52:36.438779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.974 [2024-10-01 13:52:36.438794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.974 [2024-10-01 13:52:36.438825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.974 [2024-10-01 13:52:36.443813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.974 [2024-10-01 13:52:36.444692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.974 [2024-10-01 13:52:36.444737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.974 [2024-10-01 13:52:36.444758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.974 [2024-10-01 13:52:36.444957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.974 [2024-10-01 13:52:36.445014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.974 [2024-10-01 13:52:36.445036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.974 [2024-10-01 13:52:36.445055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.974 [2024-10-01 13:52:36.445088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.974 [2024-10-01 13:52:36.448614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.974 [2024-10-01 13:52:36.448724] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.974 [2024-10-01 13:52:36.448765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.974 [2024-10-01 13:52:36.448819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.974 [2024-10-01 13:52:36.448856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.974 [2024-10-01 13:52:36.448888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.974 [2024-10-01 13:52:36.448905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.974 [2024-10-01 13:52:36.448936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.974 [2024-10-01 13:52:36.448970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.974 [2024-10-01 13:52:36.453926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.974 [2024-10-01 13:52:36.454041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.974 [2024-10-01 13:52:36.454080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.974 [2024-10-01 13:52:36.454100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.974 [2024-10-01 13:52:36.454133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.974 [2024-10-01 13:52:36.454165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.974 [2024-10-01 13:52:36.454183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.974 [2024-10-01 13:52:36.454197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.974 [2024-10-01 13:52:36.454228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.974 [2024-10-01 13:52:36.459124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.974 [2024-10-01 13:52:36.459239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.974 [2024-10-01 13:52:36.459278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.974 [2024-10-01 13:52:36.459298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.974 [2024-10-01 13:52:36.460063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.974 [2024-10-01 13:52:36.460269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.974 [2024-10-01 13:52:36.460303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.974 [2024-10-01 13:52:36.460321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.974 [2024-10-01 13:52:36.460384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.974 [2024-10-01 13:52:36.464017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.974 [2024-10-01 13:52:36.464129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.974 [2024-10-01 13:52:36.464163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.974 [2024-10-01 13:52:36.464181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.974 [2024-10-01 13:52:36.464214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.974 [2024-10-01 13:52:36.464246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.974 [2024-10-01 13:52:36.464293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.974 [2024-10-01 13:52:36.464309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.974 [2024-10-01 13:52:36.464341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.974 [2024-10-01 13:52:36.469216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.974 [2024-10-01 13:52:36.469333] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.974 [2024-10-01 13:52:36.469366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.975 [2024-10-01 13:52:36.469384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.975 [2024-10-01 13:52:36.469417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.975 [2024-10-01 13:52:36.469449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.975 [2024-10-01 13:52:36.469467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.975 [2024-10-01 13:52:36.469482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.975 [2024-10-01 13:52:36.469522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.975 [2024-10-01 13:52:36.474409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.975 [2024-10-01 13:52:36.474524] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.975 [2024-10-01 13:52:36.474576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.975 [2024-10-01 13:52:36.474597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.975 [2024-10-01 13:52:36.475349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.975 [2024-10-01 13:52:36.475555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.975 [2024-10-01 13:52:36.475590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.975 [2024-10-01 13:52:36.475608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.975 [2024-10-01 13:52:36.475649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.975 [2024-10-01 13:52:36.479315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.975 [2024-10-01 13:52:36.479424] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.975 [2024-10-01 13:52:36.479465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.975 [2024-10-01 13:52:36.479485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.975 [2024-10-01 13:52:36.479518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.975 [2024-10-01 13:52:36.479549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.975 [2024-10-01 13:52:36.479566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.975 [2024-10-01 13:52:36.479581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.975 [2024-10-01 13:52:36.479612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.975 [2024-10-01 13:52:36.484502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.975 [2024-10-01 13:52:36.484639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.975 [2024-10-01 13:52:36.484679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.975 [2024-10-01 13:52:36.484699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.975 [2024-10-01 13:52:36.484733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.975 [2024-10-01 13:52:36.484765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.975 [2024-10-01 13:52:36.484782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.975 [2024-10-01 13:52:36.484797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.975 [2024-10-01 13:52:36.484828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.975 [2024-10-01 13:52:36.489573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.975 [2024-10-01 13:52:36.489692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.975 [2024-10-01 13:52:36.489732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.975 [2024-10-01 13:52:36.489752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.975 [2024-10-01 13:52:36.490497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.975 [2024-10-01 13:52:36.490725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.975 [2024-10-01 13:52:36.490761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.975 [2024-10-01 13:52:36.490778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.975 [2024-10-01 13:52:36.490820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.975 [2024-10-01 13:52:36.494612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.975 [2024-10-01 13:52:36.494731] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.975 [2024-10-01 13:52:36.494764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.975 [2024-10-01 13:52:36.494783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.975 [2024-10-01 13:52:36.494815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.975 [2024-10-01 13:52:36.494847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.975 [2024-10-01 13:52:36.494864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.975 [2024-10-01 13:52:36.494879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.975 [2024-10-01 13:52:36.494909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.975 [2024-10-01 13:52:36.499667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.975 [2024-10-01 13:52:36.499778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.975 [2024-10-01 13:52:36.499809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.975 [2024-10-01 13:52:36.499828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.975 [2024-10-01 13:52:36.499881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.975 [2024-10-01 13:52:36.499931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.975 [2024-10-01 13:52:36.499953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.975 [2024-10-01 13:52:36.499968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.975 [2024-10-01 13:52:36.500000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.975 [2024-10-01 13:52:36.504789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.975 [2024-10-01 13:52:36.504900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.975 [2024-10-01 13:52:36.504962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.975 [2024-10-01 13:52:36.504983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.975 [2024-10-01 13:52:36.505713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.975 [2024-10-01 13:52:36.505908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.975 [2024-10-01 13:52:36.505954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.975 [2024-10-01 13:52:36.505971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.975 [2024-10-01 13:52:36.506012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.975 [2024-10-01 13:52:36.509759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.975 [2024-10-01 13:52:36.509870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.975 [2024-10-01 13:52:36.509908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.975 [2024-10-01 13:52:36.509942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.975 [2024-10-01 13:52:36.509976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.975 [2024-10-01 13:52:36.510007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.975 [2024-10-01 13:52:36.510025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.975 [2024-10-01 13:52:36.510040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.975 [2024-10-01 13:52:36.510071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.975 [2024-10-01 13:52:36.514876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.975 [2024-10-01 13:52:36.514999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.975 [2024-10-01 13:52:36.515041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.975 [2024-10-01 13:52:36.515061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.975 [2024-10-01 13:52:36.515094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.975 [2024-10-01 13:52:36.515125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.975 [2024-10-01 13:52:36.515142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.975 [2024-10-01 13:52:36.515174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.975 [2024-10-01 13:52:36.515208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.975 [2024-10-01 13:52:36.520039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.975 [2024-10-01 13:52:36.520152] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.975 [2024-10-01 13:52:36.520191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.975 [2024-10-01 13:52:36.520212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.975 [2024-10-01 13:52:36.520954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.976 [2024-10-01 13:52:36.521151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.976 [2024-10-01 13:52:36.521186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.976 [2024-10-01 13:52:36.521203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.976 [2024-10-01 13:52:36.521246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.976 [2024-10-01 13:52:36.524979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.976 [2024-10-01 13:52:36.525089] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.976 [2024-10-01 13:52:36.525121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.976 [2024-10-01 13:52:36.525139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.976 [2024-10-01 13:52:36.525173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.976 [2024-10-01 13:52:36.525204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.976 [2024-10-01 13:52:36.525222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.976 [2024-10-01 13:52:36.525236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.976 [2024-10-01 13:52:36.525266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.976 [2024-10-01 13:52:36.530132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.976 [2024-10-01 13:52:36.530244] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.976 [2024-10-01 13:52:36.530275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.976 [2024-10-01 13:52:36.530293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.976 [2024-10-01 13:52:36.530326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.976 [2024-10-01 13:52:36.530357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.976 [2024-10-01 13:52:36.530375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.976 [2024-10-01 13:52:36.530389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.976 [2024-10-01 13:52:36.530420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.976 [2024-10-01 13:52:36.535333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.976 [2024-10-01 13:52:36.535447] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.976 [2024-10-01 13:52:36.535513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.976 [2024-10-01 13:52:36.535536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.976 [2024-10-01 13:52:36.536294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.976 [2024-10-01 13:52:36.536499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.976 [2024-10-01 13:52:36.536534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.976 [2024-10-01 13:52:36.536552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.976 [2024-10-01 13:52:36.536615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.976 [2024-10-01 13:52:36.540219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.976 [2024-10-01 13:52:36.540329] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.976 [2024-10-01 13:52:36.540368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.976 [2024-10-01 13:52:36.540388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.976 [2024-10-01 13:52:36.540422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.976 [2024-10-01 13:52:36.540453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.976 [2024-10-01 13:52:36.540471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.976 [2024-10-01 13:52:36.540486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.976 [2024-10-01 13:52:36.540516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.976 [2024-10-01 13:52:36.545419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.976 [2024-10-01 13:52:36.545532] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.976 [2024-10-01 13:52:36.545574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.976 [2024-10-01 13:52:36.545594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.976 [2024-10-01 13:52:36.545628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.976 [2024-10-01 13:52:36.545659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.976 [2024-10-01 13:52:36.545677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.976 [2024-10-01 13:52:36.545692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.976 [2024-10-01 13:52:36.545723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.976 [2024-10-01 13:52:36.550670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.976 [2024-10-01 13:52:36.550785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.976 [2024-10-01 13:52:36.550823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.976 [2024-10-01 13:52:36.550844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.976 [2024-10-01 13:52:36.551602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.976 [2024-10-01 13:52:36.551844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.976 [2024-10-01 13:52:36.551879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.976 [2024-10-01 13:52:36.551897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.976 [2024-10-01 13:52:36.551956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.976 [2024-10-01 13:52:36.555508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.976 [2024-10-01 13:52:36.555619] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.976 [2024-10-01 13:52:36.555658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.976 [2024-10-01 13:52:36.555679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.976 [2024-10-01 13:52:36.555713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.976 [2024-10-01 13:52:36.555744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.976 [2024-10-01 13:52:36.555762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.976 [2024-10-01 13:52:36.555777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.976 [2024-10-01 13:52:36.555808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.976 [2024-10-01 13:52:36.560762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.976 [2024-10-01 13:52:36.560876] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.976 [2024-10-01 13:52:36.560908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.976 [2024-10-01 13:52:36.560944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.976 [2024-10-01 13:52:36.560978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.976 [2024-10-01 13:52:36.561010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.976 [2024-10-01 13:52:36.561027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.976 [2024-10-01 13:52:36.561041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.976 [2024-10-01 13:52:36.561072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.976 [2024-10-01 13:52:36.566054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.976 [2024-10-01 13:52:36.566897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.976 [2024-10-01 13:52:36.566952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.976 [2024-10-01 13:52:36.566974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.976 [2024-10-01 13:52:36.567159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.976 [2024-10-01 13:52:36.567215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.976 [2024-10-01 13:52:36.567238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.976 [2024-10-01 13:52:36.567255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.976 [2024-10-01 13:52:36.567319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.976 [2024-10-01 13:52:36.570851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.976 [2024-10-01 13:52:36.570977] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.976 [2024-10-01 13:52:36.571016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.976 [2024-10-01 13:52:36.571037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.976 [2024-10-01 13:52:36.571070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.976 [2024-10-01 13:52:36.571101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.976 [2024-10-01 13:52:36.571118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.976 [2024-10-01 13:52:36.571133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.976 [2024-10-01 13:52:36.571164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.976 [2024-10-01 13:52:36.576148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.976 [2024-10-01 13:52:36.576259] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.976 [2024-10-01 13:52:36.576298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.976 [2024-10-01 13:52:36.576319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.976 [2024-10-01 13:52:36.576352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.977 [2024-10-01 13:52:36.576383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.977 [2024-10-01 13:52:36.576401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.977 [2024-10-01 13:52:36.576415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.977 [2024-10-01 13:52:36.576446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.977 [2024-10-01 13:52:36.581233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.977 [2024-10-01 13:52:36.581344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.977 [2024-10-01 13:52:36.581387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.977 [2024-10-01 13:52:36.581408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.977 [2024-10-01 13:52:36.582154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.977 [2024-10-01 13:52:36.582358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.977 [2024-10-01 13:52:36.582393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.977 [2024-10-01 13:52:36.582410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.977 [2024-10-01 13:52:36.582452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.977 [2024-10-01 13:52:36.586232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.977 [2024-10-01 13:52:36.586343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.977 [2024-10-01 13:52:36.586374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.977 [2024-10-01 13:52:36.586413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.977 [2024-10-01 13:52:36.586449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.977 [2024-10-01 13:52:36.586480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.977 [2024-10-01 13:52:36.586510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.977 [2024-10-01 13:52:36.586525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.977 [2024-10-01 13:52:36.586602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.977 [2024-10-01 13:52:36.591321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.977 [2024-10-01 13:52:36.591448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.977 [2024-10-01 13:52:36.591479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.977 [2024-10-01 13:52:36.591496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.977 [2024-10-01 13:52:36.591528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.977 [2024-10-01 13:52:36.591558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.977 [2024-10-01 13:52:36.591575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.977 [2024-10-01 13:52:36.591589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.977 [2024-10-01 13:52:36.591634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.977 [2024-10-01 13:52:36.596726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.977 [2024-10-01 13:52:36.596837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.977 [2024-10-01 13:52:36.596876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.977 [2024-10-01 13:52:36.596896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.977 [2024-10-01 13:52:36.597637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.977 [2024-10-01 13:52:36.597836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.977 [2024-10-01 13:52:36.597872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.977 [2024-10-01 13:52:36.597890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.977 [2024-10-01 13:52:36.597945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.977 [2024-10-01 13:52:36.601407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.977 [2024-10-01 13:52:36.601526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.977 [2024-10-01 13:52:36.601558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.977 [2024-10-01 13:52:36.601576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.977 [2024-10-01 13:52:36.601609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.977 [2024-10-01 13:52:36.601642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.977 [2024-10-01 13:52:36.601677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.977 [2024-10-01 13:52:36.601692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.977 [2024-10-01 13:52:36.601723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.977 [2024-10-01 13:52:36.606812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.977 [2024-10-01 13:52:36.606939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.977 [2024-10-01 13:52:36.606971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.977 [2024-10-01 13:52:36.606990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.977 [2024-10-01 13:52:36.607023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.977 [2024-10-01 13:52:36.607055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.977 [2024-10-01 13:52:36.607072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.977 [2024-10-01 13:52:36.607087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.977 [2024-10-01 13:52:36.607117] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.977 [2024-10-01 13:52:36.612003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.977 [2024-10-01 13:52:36.612133] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.977 [2024-10-01 13:52:36.612165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.977 [2024-10-01 13:52:36.612184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.977 [2024-10-01 13:52:36.612949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.977 [2024-10-01 13:52:36.613177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.977 [2024-10-01 13:52:36.613213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.977 [2024-10-01 13:52:36.613232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.977 [2024-10-01 13:52:36.613274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.977 [2024-10-01 13:52:36.616906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.977 [2024-10-01 13:52:36.617056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.977 [2024-10-01 13:52:36.617089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.977 [2024-10-01 13:52:36.617111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.977 [2024-10-01 13:52:36.617161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.977 [2024-10-01 13:52:36.617197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.977 [2024-10-01 13:52:36.617215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.977 [2024-10-01 13:52:36.617230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.977 [2024-10-01 13:52:36.617261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.977 [2024-10-01 13:52:36.622108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.977 [2024-10-01 13:52:36.622266] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.977 [2024-10-01 13:52:36.622300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.977 [2024-10-01 13:52:36.622319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.977 [2024-10-01 13:52:36.622352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.977 [2024-10-01 13:52:36.622384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.977 [2024-10-01 13:52:36.622402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.977 [2024-10-01 13:52:36.622416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.977 [2024-10-01 13:52:36.622447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.977 [2024-10-01 13:52:36.627372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.977 [2024-10-01 13:52:36.628220] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.977 [2024-10-01 13:52:36.628266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.977 [2024-10-01 13:52:36.628288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.977 [2024-10-01 13:52:36.628469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.978 [2024-10-01 13:52:36.628527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.978 [2024-10-01 13:52:36.628549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.978 [2024-10-01 13:52:36.628564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.978 [2024-10-01 13:52:36.628597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.978 [2024-10-01 13:52:36.632239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.978 [2024-10-01 13:52:36.632356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.978 [2024-10-01 13:52:36.632389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.978 [2024-10-01 13:52:36.632408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.978 [2024-10-01 13:52:36.632440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.978 [2024-10-01 13:52:36.632472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.978 [2024-10-01 13:52:36.632490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.978 [2024-10-01 13:52:36.632505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.978 [2024-10-01 13:52:36.632535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.978 [2024-10-01 13:52:36.637464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.978 [2024-10-01 13:52:36.637582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.978 [2024-10-01 13:52:36.637616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.978 [2024-10-01 13:52:36.637635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.978 [2024-10-01 13:52:36.637688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.978 [2024-10-01 13:52:36.637721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.978 [2024-10-01 13:52:36.637739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.978 [2024-10-01 13:52:36.637754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.978 [2024-10-01 13:52:36.637785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.978 [2024-10-01 13:52:36.642572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.978 [2024-10-01 13:52:36.642689] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.978 [2024-10-01 13:52:36.642721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.978 [2024-10-01 13:52:36.642740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.978 [2024-10-01 13:52:36.643490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.978 [2024-10-01 13:52:36.643714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.978 [2024-10-01 13:52:36.643750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.978 [2024-10-01 13:52:36.643768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.978 [2024-10-01 13:52:36.643811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.978 [2024-10-01 13:52:36.647556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.978 [2024-10-01 13:52:36.647667] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.978 [2024-10-01 13:52:36.647699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.978 [2024-10-01 13:52:36.647717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.978 [2024-10-01 13:52:36.647750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.978 [2024-10-01 13:52:36.647781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.978 [2024-10-01 13:52:36.647799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.978 [2024-10-01 13:52:36.647814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.978 [2024-10-01 13:52:36.647844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.978 [2024-10-01 13:52:36.652660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.978 [2024-10-01 13:52:36.652774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.978 [2024-10-01 13:52:36.652806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.978 [2024-10-01 13:52:36.652824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.978 [2024-10-01 13:52:36.652857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.978 [2024-10-01 13:52:36.652889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.978 [2024-10-01 13:52:36.652907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.978 [2024-10-01 13:52:36.652956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.978 [2024-10-01 13:52:36.652992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.978 [2024-10-01 13:52:36.657804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.978 [2024-10-01 13:52:36.657932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.978 [2024-10-01 13:52:36.657966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.978 [2024-10-01 13:52:36.657985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.978 [2024-10-01 13:52:36.658739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.978 [2024-10-01 13:52:36.658967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.978 [2024-10-01 13:52:36.659001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.978 [2024-10-01 13:52:36.659019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.978 [2024-10-01 13:52:36.659060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.978 [2024-10-01 13:52:36.662752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.978 [2024-10-01 13:52:36.662872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.978 [2024-10-01 13:52:36.662903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.978 [2024-10-01 13:52:36.662937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.978 [2024-10-01 13:52:36.662972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.978 [2024-10-01 13:52:36.663004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.978 [2024-10-01 13:52:36.663021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.978 [2024-10-01 13:52:36.663036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.978 [2024-10-01 13:52:36.663066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.978 [2024-10-01 13:52:36.667889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.978 [2024-10-01 13:52:36.668016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.978 [2024-10-01 13:52:36.668048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.978 [2024-10-01 13:52:36.668066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.978 [2024-10-01 13:52:36.668098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.978 [2024-10-01 13:52:36.668130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.978 [2024-10-01 13:52:36.668148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.978 [2024-10-01 13:52:36.668162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.978 [2024-10-01 13:52:36.668194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.978 [2024-10-01 13:52:36.673139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.978 [2024-10-01 13:52:36.673253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.978 [2024-10-01 13:52:36.673303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.978 [2024-10-01 13:52:36.673323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.978 [2024-10-01 13:52:36.674068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.978 [2024-10-01 13:52:36.674292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.978 [2024-10-01 13:52:36.674328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.978 [2024-10-01 13:52:36.674346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.978 [2024-10-01 13:52:36.674388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.978 [2024-10-01 13:52:36.677989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.978 [2024-10-01 13:52:36.678098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.978 [2024-10-01 13:52:36.678129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.978 [2024-10-01 13:52:36.678148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.978 [2024-10-01 13:52:36.678180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.978 [2024-10-01 13:52:36.678226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.978 [2024-10-01 13:52:36.678247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.978 [2024-10-01 13:52:36.678262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.978 [2024-10-01 13:52:36.678293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.978 [2024-10-01 13:52:36.683229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.978 [2024-10-01 13:52:36.683341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.978 [2024-10-01 13:52:36.683372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.978 [2024-10-01 13:52:36.683390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.978 [2024-10-01 13:52:36.683422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.978 [2024-10-01 13:52:36.683454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.979 [2024-10-01 13:52:36.683472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.979 [2024-10-01 13:52:36.683487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.979 [2024-10-01 13:52:36.683517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.979 [2024-10-01 13:52:36.688443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.979 [2024-10-01 13:52:36.688555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.979 [2024-10-01 13:52:36.688588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.979 [2024-10-01 13:52:36.688606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.979 [2024-10-01 13:52:36.689362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.979 [2024-10-01 13:52:36.689581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.979 [2024-10-01 13:52:36.689616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.979 [2024-10-01 13:52:36.689634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.979 [2024-10-01 13:52:36.689674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.979 [2024-10-01 13:52:36.693320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.979 [2024-10-01 13:52:36.693429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.979 [2024-10-01 13:52:36.693461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.979 [2024-10-01 13:52:36.693479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.979 [2024-10-01 13:52:36.693511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.979 [2024-10-01 13:52:36.693542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.979 [2024-10-01 13:52:36.693560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.979 [2024-10-01 13:52:36.693574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.979 [2024-10-01 13:52:36.693604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.979 [2024-10-01 13:52:36.698535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.979 [2024-10-01 13:52:36.698660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.979 [2024-10-01 13:52:36.698691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.979 [2024-10-01 13:52:36.698709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.979 [2024-10-01 13:52:36.698742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.979 [2024-10-01 13:52:36.698772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.979 [2024-10-01 13:52:36.698790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.979 [2024-10-01 13:52:36.698804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.979 [2024-10-01 13:52:36.698834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.979 [2024-10-01 13:52:36.703759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.979 [2024-10-01 13:52:36.703871] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.979 [2024-10-01 13:52:36.703902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.979 [2024-10-01 13:52:36.703937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.979 [2024-10-01 13:52:36.704667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.979 [2024-10-01 13:52:36.704863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.979 [2024-10-01 13:52:36.704897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.979 [2024-10-01 13:52:36.704928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.979 [2024-10-01 13:52:36.704991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.979 [2024-10-01 13:52:36.708638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.979 [2024-10-01 13:52:36.708748] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.979 [2024-10-01 13:52:36.708781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.979 [2024-10-01 13:52:36.708799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.979 [2024-10-01 13:52:36.708843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.979 [2024-10-01 13:52:36.708876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.979 [2024-10-01 13:52:36.708893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.979 [2024-10-01 13:52:36.708908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.979 [2024-10-01 13:52:36.708957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.979 [2024-10-01 13:52:36.713846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.979 [2024-10-01 13:52:36.713972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.979 [2024-10-01 13:52:36.714004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.979 [2024-10-01 13:52:36.714022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.979 [2024-10-01 13:52:36.714056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.979 [2024-10-01 13:52:36.714088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.979 [2024-10-01 13:52:36.714105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.979 [2024-10-01 13:52:36.714119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.979 [2024-10-01 13:52:36.714150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.979 [2024-10-01 13:52:36.719051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.979 [2024-10-01 13:52:36.719164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.979 [2024-10-01 13:52:36.719201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.979 [2024-10-01 13:52:36.719221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.979 [2024-10-01 13:52:36.719966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.979 [2024-10-01 13:52:36.720163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.979 [2024-10-01 13:52:36.720198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.979 [2024-10-01 13:52:36.720216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.979 [2024-10-01 13:52:36.720276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.979 [2024-10-01 13:52:36.723951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.979 [2024-10-01 13:52:36.724061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.979 [2024-10-01 13:52:36.724092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.979 [2024-10-01 13:52:36.724134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.979 [2024-10-01 13:52:36.724172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.979 [2024-10-01 13:52:36.724221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.979 [2024-10-01 13:52:36.724243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.979 [2024-10-01 13:52:36.724258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.979 [2024-10-01 13:52:36.724289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.979 [2024-10-01 13:52:36.729159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.979 [2024-10-01 13:52:36.729331] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.979 [2024-10-01 13:52:36.729389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.979 [2024-10-01 13:52:36.729424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.979 [2024-10-01 13:52:36.729483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.979 [2024-10-01 13:52:36.729548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.979 [2024-10-01 13:52:36.729582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.979 [2024-10-01 13:52:36.729609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.979 [2024-10-01 13:52:36.731230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
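The blocks above repeat one sequence: resetting controller, connect() refused, "Ctrlr is in error state", "controller reinitialization failed", "in failed state.", "Resetting controller failed.", then another reset attempt. The attempts alternate between the two queue pairs seen in the log (0x95d2e0 on port 4420 and 0x9bb280 on port 4421) at roughly 5 ms intervals. The following is illustrative pseudologic for that retry cadence only, under the assumption that the target stays down; the real path in bdev_nvme.c / nvme_ctrlr.c is asynchronous and poller-driven, and try_reconnect() here is a hypothetical stand-in.

/* Illustrative retry-loop sketch mirroring the log cadence; not SPDK code. */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical helper: returns 0 on success, -1 when the connection is refused. */
static int try_reconnect(const char *addr, int port)
{
    (void)addr;
    (void)port;
    return -1;                        /* target is unreachable, as in the log */
}

int main(void)
{
    const int reconnect_delay_us = 5000;   /* ~5 ms, matching the log timestamps */
    bool connected = false;

    for (int attempt = 0; attempt < 10 && !connected; attempt++) {
        printf("resetting controller (attempt %d)\n", attempt + 1);
        if (try_reconnect("10.0.0.3", 4420) == 0) {
            connected = true;
            break;
        }
        printf("controller reinitialization failed; retrying\n");
        usleep(reconnect_delay_us);
    }

    if (!connected) {
        printf("Resetting controller failed.\n");
    }
    return 0;
}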
00:18:34.979 [2024-10-01 13:52:36.735728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.979 [2024-10-01 13:52:36.735902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.979 [2024-10-01 13:52:36.735979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.979 [2024-10-01 13:52:36.736015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.979 [2024-10-01 13:52:36.736071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.979 [2024-10-01 13:52:36.736122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.979 [2024-10-01 13:52:36.736152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.979 [2024-10-01 13:52:36.736176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.979 [2024-10-01 13:52:36.736225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.979 [2024-10-01 13:52:36.739290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.979 [2024-10-01 13:52:36.739467] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.979 [2024-10-01 13:52:36.739528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.979 [2024-10-01 13:52:36.739566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.979 [2024-10-01 13:52:36.739620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.979 [2024-10-01 13:52:36.739670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.979 [2024-10-01 13:52:36.739733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.979 [2024-10-01 13:52:36.739764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.980 [2024-10-01 13:52:36.739816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.980 [2024-10-01 13:52:36.746712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.980 [2024-10-01 13:52:36.747946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.980 [2024-10-01 13:52:36.748010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.980 [2024-10-01 13:52:36.748047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.980 [2024-10-01 13:52:36.748293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.980 [2024-10-01 13:52:36.748463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.980 [2024-10-01 13:52:36.748514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.980 [2024-10-01 13:52:36.748549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.980 [2024-10-01 13:52:36.750025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.980 [2024-10-01 13:52:36.751393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.980 [2024-10-01 13:52:36.751573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.980 [2024-10-01 13:52:36.751637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.980 [2024-10-01 13:52:36.751674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.980 [2024-10-01 13:52:36.753133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.980 [2024-10-01 13:52:36.754223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.980 [2024-10-01 13:52:36.754284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.980 [2024-10-01 13:52:36.754319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.980 [2024-10-01 13:52:36.754471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.980 [2024-10-01 13:52:36.758585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.980 [2024-10-01 13:52:36.758758] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.980 [2024-10-01 13:52:36.758820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.980 [2024-10-01 13:52:36.758856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.980 [2024-10-01 13:52:36.759212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.980 [2024-10-01 13:52:36.759455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.980 [2024-10-01 13:52:36.759511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.980 [2024-10-01 13:52:36.759543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.980 [2024-10-01 13:52:36.759689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.980 [2024-10-01 13:52:36.761897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.980 [2024-10-01 13:52:36.762086] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.980 [2024-10-01 13:52:36.762146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.980 [2024-10-01 13:52:36.762181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.980 [2024-10-01 13:52:36.763320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.980 [2024-10-01 13:52:36.763650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.980 [2024-10-01 13:52:36.763708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.980 [2024-10-01 13:52:36.763742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.980 [2024-10-01 13:52:36.763892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.980 [2024-10-01 13:52:36.769625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.980 [2024-10-01 13:52:36.769813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.980 [2024-10-01 13:52:36.769878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.980 [2024-10-01 13:52:36.769934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.980 [2024-10-01 13:52:36.770002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.980 [2024-10-01 13:52:36.770056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.980 [2024-10-01 13:52:36.770087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.980 [2024-10-01 13:52:36.770114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.980 [2024-10-01 13:52:36.770184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.980 [2024-10-01 13:52:36.774135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.980 [2024-10-01 13:52:36.774623] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.980 [2024-10-01 13:52:36.774686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.980 [2024-10-01 13:52:36.774723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.980 [2024-10-01 13:52:36.774948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.980 [2024-10-01 13:52:36.775119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.980 [2024-10-01 13:52:36.775166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.980 [2024-10-01 13:52:36.775197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.980 [2024-10-01 13:52:36.775260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.980 [2024-10-01 13:52:36.781873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.980 [2024-10-01 13:52:36.782063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.980 [2024-10-01 13:52:36.782123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.980 [2024-10-01 13:52:36.782158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.980 [2024-10-01 13:52:36.782260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.980 [2024-10-01 13:52:36.782317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.980 [2024-10-01 13:52:36.782350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.980 [2024-10-01 13:52:36.782376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.980 [2024-10-01 13:52:36.783896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.980 [2024-10-01 13:52:36.785168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.980 [2024-10-01 13:52:36.785334] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.980 [2024-10-01 13:52:36.785393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.980 [2024-10-01 13:52:36.785429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.980 [2024-10-01 13:52:36.785484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.980 [2024-10-01 13:52:36.785536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.980 [2024-10-01 13:52:36.785568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.980 [2024-10-01 13:52:36.785595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.980 [2024-10-01 13:52:36.785645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.980 [2024-10-01 13:52:36.792747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.980 [2024-10-01 13:52:36.793995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.980 [2024-10-01 13:52:36.794059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.980 [2024-10-01 13:52:36.794097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.980 [2024-10-01 13:52:36.794374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.980 [2024-10-01 13:52:36.794563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.980 [2024-10-01 13:52:36.794613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.980 [2024-10-01 13:52:36.794645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.980 [2024-10-01 13:52:36.796149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.980 [2024-10-01 13:52:36.797543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.980 [2024-10-01 13:52:36.797725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.980 [2024-10-01 13:52:36.797784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.980 [2024-10-01 13:52:36.797819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.980 [2024-10-01 13:52:36.799331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.980 [2024-10-01 13:52:36.800448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.980 [2024-10-01 13:52:36.800508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.980 [2024-10-01 13:52:36.800566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.980 [2024-10-01 13:52:36.800734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.980 [2024-10-01 13:52:36.804714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.980 [2024-10-01 13:52:36.804877] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.980 [2024-10-01 13:52:36.804948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.980 [2024-10-01 13:52:36.804986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.980 [2024-10-01 13:52:36.805330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.980 [2024-10-01 13:52:36.805547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.980 [2024-10-01 13:52:36.805599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.980 [2024-10-01 13:52:36.805630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.980 [2024-10-01 13:52:36.805777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.980 [2024-10-01 13:52:36.808031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.981 [2024-10-01 13:52:36.809260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.981 [2024-10-01 13:52:36.809330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.981 [2024-10-01 13:52:36.809368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.981 [2024-10-01 13:52:36.809621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.981 [2024-10-01 13:52:36.809791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.981 [2024-10-01 13:52:36.809840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.981 [2024-10-01 13:52:36.809873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.981 [2024-10-01 13:52:36.811385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.981 [2024-10-01 13:52:36.815505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.981 [2024-10-01 13:52:36.815667] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.981 [2024-10-01 13:52:36.815726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.981 [2024-10-01 13:52:36.815761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.981 [2024-10-01 13:52:36.815816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.981 [2024-10-01 13:52:36.815867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.981 [2024-10-01 13:52:36.815899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.981 [2024-10-01 13:52:36.815947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.981 [2024-10-01 13:52:36.816001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.981 [2024-10-01 13:52:36.819932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.981 [2024-10-01 13:52:36.820113] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.981 [2024-10-01 13:52:36.820197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.981 [2024-10-01 13:52:36.820233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.981 [2024-10-01 13:52:36.820583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.981 [2024-10-01 13:52:36.820831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.981 [2024-10-01 13:52:36.820884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.981 [2024-10-01 13:52:36.820931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.981 [2024-10-01 13:52:36.821094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.981 [2024-10-01 13:52:36.827526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.981 [2024-10-01 13:52:36.827689] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.981 [2024-10-01 13:52:36.827745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.981 [2024-10-01 13:52:36.827778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.981 [2024-10-01 13:52:36.827830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.981 [2024-10-01 13:52:36.827877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.981 [2024-10-01 13:52:36.827906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.981 [2024-10-01 13:52:36.827951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.981 [2024-10-01 13:52:36.828002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.981 [2024-10-01 13:52:36.831555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.981 [2024-10-01 13:52:36.831717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.981 [2024-10-01 13:52:36.831776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.981 [2024-10-01 13:52:36.831810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.981 [2024-10-01 13:52:36.831865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.981 [2024-10-01 13:52:36.831934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.981 [2024-10-01 13:52:36.831967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.981 [2024-10-01 13:52:36.831993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.981 [2024-10-01 13:52:36.832965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.981 [2024-10-01 13:52:36.839219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.981 [2024-10-01 13:52:36.839523] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.981 [2024-10-01 13:52:36.839569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.981 [2024-10-01 13:52:36.839590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.981 [2024-10-01 13:52:36.839670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.981 [2024-10-01 13:52:36.840896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.981 [2024-10-01 13:52:36.840945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.981 [2024-10-01 13:52:36.840964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.981 [2024-10-01 13:52:36.841806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.981 [2024-10-01 13:52:36.842074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.981 [2024-10-01 13:52:36.842187] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.981 [2024-10-01 13:52:36.842227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.981 [2024-10-01 13:52:36.842247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.981 [2024-10-01 13:52:36.842282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.981 [2024-10-01 13:52:36.842313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.981 [2024-10-01 13:52:36.842331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.981 [2024-10-01 13:52:36.842345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.981 [2024-10-01 13:52:36.842376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.981 [2024-10-01 13:52:36.849321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.981 [2024-10-01 13:52:36.849437] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.981 [2024-10-01 13:52:36.849469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.981 [2024-10-01 13:52:36.849488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.981 [2024-10-01 13:52:36.849521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.981 [2024-10-01 13:52:36.849553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.981 [2024-10-01 13:52:36.849570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.981 [2024-10-01 13:52:36.849584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.981 [2024-10-01 13:52:36.849617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.981 [2024-10-01 13:52:36.852480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.981 [2024-10-01 13:52:36.852593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.981 [2024-10-01 13:52:36.852632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.981 [2024-10-01 13:52:36.852652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.981 [2024-10-01 13:52:36.852686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.981 [2024-10-01 13:52:36.852718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.981 [2024-10-01 13:52:36.852736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.981 [2024-10-01 13:52:36.852750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.981 [2024-10-01 13:52:36.853688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.981 [2024-10-01 13:52:36.859570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.981 [2024-10-01 13:52:36.859694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.981 [2024-10-01 13:52:36.859729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.981 [2024-10-01 13:52:36.859747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.981 [2024-10-01 13:52:36.859781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.981 [2024-10-01 13:52:36.859813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.981 [2024-10-01 13:52:36.859831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.981 [2024-10-01 13:52:36.859846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.981 [2024-10-01 13:52:36.859876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.981 [2024-10-01 13:52:36.863721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.981 [2024-10-01 13:52:36.863836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.982 [2024-10-01 13:52:36.863877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.982 [2024-10-01 13:52:36.863898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.982 [2024-10-01 13:52:36.863948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.982 [2024-10-01 13:52:36.863982] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.982 [2024-10-01 13:52:36.864000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.982 [2024-10-01 13:52:36.864015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.982 [2024-10-01 13:52:36.864047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.982 8081.38 IOPS, 31.57 MiB/s [2024-10-01 13:52:36.872042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.982 [2024-10-01 13:52:36.873306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.982 [2024-10-01 13:52:36.873350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.982 [2024-10-01 13:52:36.873371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.982 [2024-10-01 13:52:36.874243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.982 [2024-10-01 13:52:36.874389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.982 [2024-10-01 13:52:36.874425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.982 [2024-10-01 13:52:36.874444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.982 [2024-10-01 13:52:36.874482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.982 [2024-10-01 13:52:36.874507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.982 [2024-10-01 13:52:36.874602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.982 [2024-10-01 13:52:36.874634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.982 [2024-10-01 13:52:36.874677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.982 [2024-10-01 13:52:36.874712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.982 [2024-10-01 13:52:36.874744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.982 [2024-10-01 13:52:36.874762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.982 [2024-10-01 13:52:36.874776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.982 [2024-10-01 13:52:36.874807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.982 [2024-10-01 13:52:36.883089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.982 [2024-10-01 13:52:36.883203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.982 [2024-10-01 13:52:36.883235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.982 [2024-10-01 13:52:36.883253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.982 [2024-10-01 13:52:36.883286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.982 [2024-10-01 13:52:36.883328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.982 [2024-10-01 13:52:36.883348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.982 [2024-10-01 13:52:36.883362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.982 [2024-10-01 13:52:36.883393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.982 [2024-10-01 13:52:36.885476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.982 [2024-10-01 13:52:36.885727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.982 [2024-10-01 13:52:36.885767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.982 [2024-10-01 13:52:36.885788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.982 [2024-10-01 13:52:36.885843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.982 [2024-10-01 13:52:36.885880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.982 [2024-10-01 13:52:36.885898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.982 [2024-10-01 13:52:36.885927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.982 [2024-10-01 13:52:36.885964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.982 [2024-10-01 13:52:36.893183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.982 [2024-10-01 13:52:36.893295] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.982 [2024-10-01 13:52:36.893327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.982 [2024-10-01 13:52:36.893345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.982 [2024-10-01 13:52:36.893600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.982 [2024-10-01 13:52:36.893768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.982 [2024-10-01 13:52:36.893819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.982 [2024-10-01 13:52:36.893838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.982 [2024-10-01 13:52:36.893983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.982 [2024-10-01 13:52:36.896105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.982 [2024-10-01 13:52:36.896215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.982 [2024-10-01 13:52:36.896254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.982 [2024-10-01 13:52:36.896274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.982 [2024-10-01 13:52:36.896307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.982 [2024-10-01 13:52:36.896339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.982 [2024-10-01 13:52:36.896357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.982 [2024-10-01 13:52:36.896371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.982 [2024-10-01 13:52:36.896403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.982 [2024-10-01 13:52:36.903276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.982 [2024-10-01 13:52:36.903388] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.982 [2024-10-01 13:52:36.903427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.982 [2024-10-01 13:52:36.903447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.982 [2024-10-01 13:52:36.903481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.982 [2024-10-01 13:52:36.903513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.982 [2024-10-01 13:52:36.903530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.982 [2024-10-01 13:52:36.903545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.982 [2024-10-01 13:52:36.903576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.982 [2024-10-01 13:52:36.907210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.982 [2024-10-01 13:52:36.907321] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.982 [2024-10-01 13:52:36.907354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.982 [2024-10-01 13:52:36.907373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.982 [2024-10-01 13:52:36.907406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.982 [2024-10-01 13:52:36.907437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.982 [2024-10-01 13:52:36.907455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.982 [2024-10-01 13:52:36.907469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.982 [2024-10-01 13:52:36.907500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.982 [2024-10-01 13:52:36.913660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.982 [2024-10-01 13:52:36.914485] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.982 [2024-10-01 13:52:36.914529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.982 [2024-10-01 13:52:36.914562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.982 [2024-10-01 13:52:36.914734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.982 [2024-10-01 13:52:36.914809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.982 [2024-10-01 13:52:36.914833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.982 [2024-10-01 13:52:36.914848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.982 [2024-10-01 13:52:36.914881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.982 [2024-10-01 13:52:36.917541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.982 [2024-10-01 13:52:36.917652] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.982 [2024-10-01 13:52:36.917691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.982 [2024-10-01 13:52:36.917711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.982 [2024-10-01 13:52:36.917745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.982 [2024-10-01 13:52:36.917776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.982 [2024-10-01 13:52:36.917794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.982 [2024-10-01 13:52:36.917809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.982 [2024-10-01 13:52:36.917840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.982 [2024-10-01 13:52:36.925022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.982 [2024-10-01 13:52:36.925145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.982 [2024-10-01 13:52:36.925176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.982 [2024-10-01 13:52:36.925195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.983 [2024-10-01 13:52:36.925228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.983 [2024-10-01 13:52:36.925259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.983 [2024-10-01 13:52:36.925277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.983 [2024-10-01 13:52:36.925292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.983 [2024-10-01 13:52:36.925322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.983 [2024-10-01 13:52:36.928166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.983 [2024-10-01 13:52:36.928987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.983 [2024-10-01 13:52:36.929030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.983 [2024-10-01 13:52:36.929051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.983 [2024-10-01 13:52:36.929247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.983 [2024-10-01 13:52:36.929303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.983 [2024-10-01 13:52:36.929325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.983 [2024-10-01 13:52:36.929340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.983 [2024-10-01 13:52:36.929372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.983 [2024-10-01 13:52:36.936126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.983 [2024-10-01 13:52:36.936239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.983 [2024-10-01 13:52:36.936281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.983 [2024-10-01 13:52:36.936302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.983 [2024-10-01 13:52:36.936335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.983 [2024-10-01 13:52:36.936367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.983 [2024-10-01 13:52:36.936384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.983 [2024-10-01 13:52:36.936398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.983 [2024-10-01 13:52:36.936429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.983 [2024-10-01 13:52:36.939529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.983 [2024-10-01 13:52:36.939641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.983 [2024-10-01 13:52:36.939672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.983 [2024-10-01 13:52:36.939692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.983 [2024-10-01 13:52:36.939724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.983 [2024-10-01 13:52:36.939756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.983 [2024-10-01 13:52:36.939774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.983 [2024-10-01 13:52:36.939789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.983 [2024-10-01 13:52:36.940700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.983 [2024-10-01 13:52:36.946510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.983 [2024-10-01 13:52:36.946634] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.983 [2024-10-01 13:52:36.946673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.983 [2024-10-01 13:52:36.946693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.983 [2024-10-01 13:52:36.946727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.983 [2024-10-01 13:52:36.946758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.983 [2024-10-01 13:52:36.946775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.983 [2024-10-01 13:52:36.946809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.983 [2024-10-01 13:52:36.946843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.983 [2024-10-01 13:52:36.950578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.983 [2024-10-01 13:52:36.950689] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.983 [2024-10-01 13:52:36.950728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.983 [2024-10-01 13:52:36.950749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.983 [2024-10-01 13:52:36.950781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.983 [2024-10-01 13:52:36.950813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.983 [2024-10-01 13:52:36.950831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.983 [2024-10-01 13:52:36.950845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.983 [2024-10-01 13:52:36.950884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.983 [2024-10-01 13:52:36.957096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.983 [2024-10-01 13:52:36.957905] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.983 [2024-10-01 13:52:36.957984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.983 [2024-10-01 13:52:36.958006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.983 [2024-10-01 13:52:36.958179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.983 [2024-10-01 13:52:36.958235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.983 [2024-10-01 13:52:36.958256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.983 [2024-10-01 13:52:36.958271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.983 [2024-10-01 13:52:36.958303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.983 [2024-10-01 13:52:36.960990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.983 [2024-10-01 13:52:36.961099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.983 [2024-10-01 13:52:36.961131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.983 [2024-10-01 13:52:36.961149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.983 [2024-10-01 13:52:36.961182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.983 [2024-10-01 13:52:36.961213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.983 [2024-10-01 13:52:36.961230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.983 [2024-10-01 13:52:36.961245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.983 [2024-10-01 13:52:36.961274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.983 [2024-10-01 13:52:36.968434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.983 [2024-10-01 13:52:36.968562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.983 [2024-10-01 13:52:36.968602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.983 [2024-10-01 13:52:36.968623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.983 [2024-10-01 13:52:36.968657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.983 [2024-10-01 13:52:36.968689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.983 [2024-10-01 13:52:36.968707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.983 [2024-10-01 13:52:36.968721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.983 [2024-10-01 13:52:36.969621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.983 [2024-10-01 13:52:36.971516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.983 [2024-10-01 13:52:36.972333] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.983 [2024-10-01 13:52:36.972375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.983 [2024-10-01 13:52:36.972396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.983 [2024-10-01 13:52:36.972565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.983 [2024-10-01 13:52:36.972639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.983 [2024-10-01 13:52:36.972668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.983 [2024-10-01 13:52:36.972684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.983 [2024-10-01 13:52:36.972717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.983 [2024-10-01 13:52:36.979482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.983 [2024-10-01 13:52:36.979593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.983 [2024-10-01 13:52:36.979631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.983 [2024-10-01 13:52:36.979652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.983 [2024-10-01 13:52:36.979685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.983 [2024-10-01 13:52:36.979716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.983 [2024-10-01 13:52:36.979733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.983 [2024-10-01 13:52:36.979748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.983 [2024-10-01 13:52:36.979779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.983 [2024-10-01 13:52:36.982842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.983 [2024-10-01 13:52:36.982965] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.983 [2024-10-01 13:52:36.983004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.983 [2024-10-01 13:52:36.983025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.983 [2024-10-01 13:52:36.983058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.983 [2024-10-01 13:52:36.983107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.984 [2024-10-01 13:52:36.983127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.984 [2024-10-01 13:52:36.983142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.984 [2024-10-01 13:52:36.983173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.984 [2024-10-01 13:52:36.989866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.984 [2024-10-01 13:52:36.989995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.984 [2024-10-01 13:52:36.990035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.984 [2024-10-01 13:52:36.990056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.984 [2024-10-01 13:52:36.990089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.984 [2024-10-01 13:52:36.990119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.984 [2024-10-01 13:52:36.990136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.984 [2024-10-01 13:52:36.990151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.984 [2024-10-01 13:52:36.990182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.984 [2024-10-01 13:52:36.993952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.984 [2024-10-01 13:52:36.994062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.984 [2024-10-01 13:52:36.994101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.984 [2024-10-01 13:52:36.994122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.984 [2024-10-01 13:52:36.994156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.984 [2024-10-01 13:52:36.994187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.984 [2024-10-01 13:52:36.994204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.984 [2024-10-01 13:52:36.994232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.984 [2024-10-01 13:52:36.994264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.984 [2024-10-01 13:52:37.000498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.984 [2024-10-01 13:52:37.001360] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.984 [2024-10-01 13:52:37.001404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.984 [2024-10-01 13:52:37.001425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.984 [2024-10-01 13:52:37.001621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.984 [2024-10-01 13:52:37.001679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.984 [2024-10-01 13:52:37.001701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.984 [2024-10-01 13:52:37.001718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.984 [2024-10-01 13:52:37.001778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.984 [2024-10-01 13:52:37.004459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.984 [2024-10-01 13:52:37.004572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.984 [2024-10-01 13:52:37.004606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.984 [2024-10-01 13:52:37.004625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.984 [2024-10-01 13:52:37.004657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.984 [2024-10-01 13:52:37.004688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.984 [2024-10-01 13:52:37.004706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.984 [2024-10-01 13:52:37.004721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.984 [2024-10-01 13:52:37.004752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.984 [2024-10-01 13:52:37.012001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.984 [2024-10-01 13:52:37.012120] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.984 [2024-10-01 13:52:37.012153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.984 [2024-10-01 13:52:37.012172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.984 [2024-10-01 13:52:37.012205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.984 [2024-10-01 13:52:37.012237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.984 [2024-10-01 13:52:37.012255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.984 [2024-10-01 13:52:37.012270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.984 [2024-10-01 13:52:37.012301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.984 [2024-10-01 13:52:37.015112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.984 [2024-10-01 13:52:37.015933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.984 [2024-10-01 13:52:37.015975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.984 [2024-10-01 13:52:37.015995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.984 [2024-10-01 13:52:37.016180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.984 [2024-10-01 13:52:37.016237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.984 [2024-10-01 13:52:37.016259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.984 [2024-10-01 13:52:37.016274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.984 [2024-10-01 13:52:37.016306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.984 [2024-10-01 13:52:37.023113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.984 [2024-10-01 13:52:37.023229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.984 [2024-10-01 13:52:37.023261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.984 [2024-10-01 13:52:37.023305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.984 [2024-10-01 13:52:37.023341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.984 [2024-10-01 13:52:37.023373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.984 [2024-10-01 13:52:37.023391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.984 [2024-10-01 13:52:37.023405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.984 [2024-10-01 13:52:37.023436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.984 [2024-10-01 13:52:37.026487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.984 [2024-10-01 13:52:37.026607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.984 [2024-10-01 13:52:37.026646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.984 [2024-10-01 13:52:37.026667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.984 [2024-10-01 13:52:37.026700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.984 [2024-10-01 13:52:37.026739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.984 [2024-10-01 13:52:37.026757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.984 [2024-10-01 13:52:37.026772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.984 [2024-10-01 13:52:37.027683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.984 [2024-10-01 13:52:37.033478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.984 [2024-10-01 13:52:37.033593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.984 [2024-10-01 13:52:37.033632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.984 [2024-10-01 13:52:37.033653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.984 [2024-10-01 13:52:37.033687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.984 [2024-10-01 13:52:37.033718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.984 [2024-10-01 13:52:37.033735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.984 [2024-10-01 13:52:37.033751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.984 [2024-10-01 13:52:37.033781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.984 [2024-10-01 13:52:37.037625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.984 [2024-10-01 13:52:37.037735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.984 [2024-10-01 13:52:37.037774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.984 [2024-10-01 13:52:37.037795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.984 [2024-10-01 13:52:37.037828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.984 [2024-10-01 13:52:37.037860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.984 [2024-10-01 13:52:37.037892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.984 [2024-10-01 13:52:37.037908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.984 [2024-10-01 13:52:37.037961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.984 [2024-10-01 13:52:37.044172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.984 [2024-10-01 13:52:37.044995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.984 [2024-10-01 13:52:37.045037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.984 [2024-10-01 13:52:37.045058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.984 [2024-10-01 13:52:37.045228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.984 [2024-10-01 13:52:37.045284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.984 [2024-10-01 13:52:37.045305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.984 [2024-10-01 13:52:37.045319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.985 [2024-10-01 13:52:37.045351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.985 [2024-10-01 13:52:37.048114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.985 [2024-10-01 13:52:37.048225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.985 [2024-10-01 13:52:37.048263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.985 [2024-10-01 13:52:37.048284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.985 [2024-10-01 13:52:37.048317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.985 [2024-10-01 13:52:37.048348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.985 [2024-10-01 13:52:37.048366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.985 [2024-10-01 13:52:37.048381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.985 [2024-10-01 13:52:37.048412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.985 [2024-10-01 13:52:37.055588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.985 [2024-10-01 13:52:37.055701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.985 [2024-10-01 13:52:37.055732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.985 [2024-10-01 13:52:37.055750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.985 [2024-10-01 13:52:37.055783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.985 [2024-10-01 13:52:37.055815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.985 [2024-10-01 13:52:37.055832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.985 [2024-10-01 13:52:37.055847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.985 [2024-10-01 13:52:37.055877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.985 [2024-10-01 13:52:37.058759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.985 [2024-10-01 13:52:37.059591] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.985 [2024-10-01 13:52:37.059634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.985 [2024-10-01 13:52:37.059655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.985 [2024-10-01 13:52:37.059836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.985 [2024-10-01 13:52:37.059925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.985 [2024-10-01 13:52:37.059951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.985 [2024-10-01 13:52:37.059967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.985 [2024-10-01 13:52:37.060000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.985 [2024-10-01 13:52:37.066875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.985 [2024-10-01 13:52:37.067056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.985 [2024-10-01 13:52:37.067101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.985 [2024-10-01 13:52:37.067122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.985 [2024-10-01 13:52:37.067157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.985 [2024-10-01 13:52:37.067204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.985 [2024-10-01 13:52:37.067225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.985 [2024-10-01 13:52:37.067241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.985 [2024-10-01 13:52:37.067274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.985 [2024-10-01 13:52:37.068854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.985 [2024-10-01 13:52:37.070196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.985 [2024-10-01 13:52:37.070241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95d2e0 with addr=10.0.0.3, port=4420 00:18:34.985 [2024-10-01 13:52:37.070262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95d2e0 is same with the state(6) to be set 00:18:34.985 [2024-10-01 13:52:37.070486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d2e0 (9): Bad file descriptor 00:18:34.985 [2024-10-01 13:52:37.070558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.985 [2024-10-01 13:52:37.070583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.985 [2024-10-01 13:52:37.070599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.985 [2024-10-01 13:52:37.070632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.985 [2024-10-01 13:52:37.077611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.985 [2024-10-01 13:52:37.077785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.985 [2024-10-01 13:52:37.077821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.985 [2024-10-01 13:52:37.077840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.985 [2024-10-01 13:52:37.077936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.985 [2024-10-01 13:52:37.077975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.985 [2024-10-01 13:52:37.077994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.985 [2024-10-01 13:52:37.078011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.985 [2024-10-01 13:52:37.078043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.985 [2024-10-01 13:52:37.078953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.985 [2024-10-01 13:52:37.089424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.985 [2024-10-01 13:52:37.089602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.985 [2024-10-01 13:52:37.089638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.985 [2024-10-01 13:52:37.089658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.985 [2024-10-01 13:52:37.089707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.985 [2024-10-01 13:52:37.089765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.985 [2024-10-01 13:52:37.089789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.985 [2024-10-01 13:52:37.089806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.985 [2024-10-01 13:52:37.089855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.985 [2024-10-01 13:52:37.096003] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:34.985 [2024-10-01 13:52:37.100843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.985 [2024-10-01 13:52:37.100987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.985 [2024-10-01 13:52:37.101041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.985 [2024-10-01 13:52:37.101062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.985 [2024-10-01 13:52:37.101103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.985 [2024-10-01 13:52:37.101140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.985 [2024-10-01 13:52:37.101159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.985 [2024-10-01 13:52:37.101174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.985 [2024-10-01 13:52:37.101210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.985 [2024-10-01 13:52:37.111726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.985 [2024-10-01 13:52:37.111964] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.985 [2024-10-01 13:52:37.112006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.985 [2024-10-01 13:52:37.112028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.985 [2024-10-01 13:52:37.112146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.985 [2024-10-01 13:52:37.112414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.985 [2024-10-01 13:52:37.112449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.985 [2024-10-01 13:52:37.112468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.985 [2024-10-01 13:52:37.112537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.985 [2024-10-01 13:52:37.121852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.985 [2024-10-01 13:52:37.122001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.985 [2024-10-01 13:52:37.122035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.985 [2024-10-01 13:52:37.122053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.985 [2024-10-01 13:52:37.122091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.985 [2024-10-01 13:52:37.122126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.986 [2024-10-01 13:52:37.122143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.986 [2024-10-01 13:52:37.122158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.986 [2024-10-01 13:52:37.122193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.986 [2024-10-01 13:52:37.131984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.986 [2024-10-01 13:52:37.132110] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.986 [2024-10-01 13:52:37.132144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.986 [2024-10-01 13:52:37.132162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.986 [2024-10-01 13:52:37.132201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.986 [2024-10-01 13:52:37.132237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.986 [2024-10-01 13:52:37.132254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.986 [2024-10-01 13:52:37.132269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.986 [2024-10-01 13:52:37.132304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.986 [2024-10-01 13:52:37.143303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.986 [2024-10-01 13:52:37.144358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.986 [2024-10-01 13:52:37.144403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.986 [2024-10-01 13:52:37.144426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.986 [2024-10-01 13:52:37.144577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.986 [2024-10-01 13:52:37.144659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.986 [2024-10-01 13:52:37.144689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.986 [2024-10-01 13:52:37.144707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.986 [2024-10-01 13:52:37.144771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.986 [2024-10-01 13:52:37.153406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.986 [2024-10-01 13:52:37.153534] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.986 [2024-10-01 13:52:37.153567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.986 [2024-10-01 13:52:37.153586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.986 [2024-10-01 13:52:37.153623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.986 [2024-10-01 13:52:37.153658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.986 [2024-10-01 13:52:37.153676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.986 [2024-10-01 13:52:37.153691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.986 [2024-10-01 13:52:37.153726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.986 [2024-10-01 13:52:37.163517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.986 [2024-10-01 13:52:37.163649] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.986 [2024-10-01 13:52:37.163685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.986 [2024-10-01 13:52:37.163704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.986 [2024-10-01 13:52:37.164441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.986 [2024-10-01 13:52:37.164572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.986 [2024-10-01 13:52:37.164606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.986 [2024-10-01 13:52:37.164624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.986 [2024-10-01 13:52:37.164662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.986 [2024-10-01 13:52:37.174910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.986 [2024-10-01 13:52:37.175083] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.986 [2024-10-01 13:52:37.175118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.986 [2024-10-01 13:52:37.175137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.986 [2024-10-01 13:52:37.175177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.986 [2024-10-01 13:52:37.175213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.986 [2024-10-01 13:52:37.175233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.986 [2024-10-01 13:52:37.175249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.986 [2024-10-01 13:52:37.175286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.986 [2024-10-01 13:52:37.185396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.986 [2024-10-01 13:52:37.185584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.986 [2024-10-01 13:52:37.185619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.986 [2024-10-01 13:52:37.185673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.986 [2024-10-01 13:52:37.185717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.986 [2024-10-01 13:52:37.185754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.986 [2024-10-01 13:52:37.185774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.986 [2024-10-01 13:52:37.185791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.986 [2024-10-01 13:52:37.185827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.986 [2024-10-01 13:52:37.195535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.986 [2024-10-01 13:52:37.195715] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.986 [2024-10-01 13:52:37.195758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.986 [2024-10-01 13:52:37.195780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.986 [2024-10-01 13:52:37.195821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.986 [2024-10-01 13:52:37.195858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.986 [2024-10-01 13:52:37.195876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.986 [2024-10-01 13:52:37.195893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.986 [2024-10-01 13:52:37.195946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.986 [2024-10-01 13:52:37.205675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.986 [2024-10-01 13:52:37.205807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.986 [2024-10-01 13:52:37.205840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.986 [2024-10-01 13:52:37.205859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.986 [2024-10-01 13:52:37.207384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.986 [2024-10-01 13:52:37.207575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.986 [2024-10-01 13:52:37.207609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.986 [2024-10-01 13:52:37.207627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.986 [2024-10-01 13:52:37.207716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.986 [2024-10-01 13:52:37.217478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.986 [2024-10-01 13:52:37.217597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.986 [2024-10-01 13:52:37.217629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.986 [2024-10-01 13:52:37.217647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.986 [2024-10-01 13:52:37.217685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.986 [2024-10-01 13:52:37.217721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.986 [2024-10-01 13:52:37.217770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.986 [2024-10-01 13:52:37.217786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.986 [2024-10-01 13:52:37.217822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.986 [2024-10-01 13:52:37.227575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.986 [2024-10-01 13:52:37.227702] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.986 [2024-10-01 13:52:37.227734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.986 [2024-10-01 13:52:37.227752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.986 [2024-10-01 13:52:37.227789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.986 [2024-10-01 13:52:37.227824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.986 [2024-10-01 13:52:37.227841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.986 [2024-10-01 13:52:37.227855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.986 [2024-10-01 13:52:37.227889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.986 [2024-10-01 13:52:37.237679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.986 [2024-10-01 13:52:37.237795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.986 [2024-10-01 13:52:37.237827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.986 [2024-10-01 13:52:37.237845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.986 [2024-10-01 13:52:37.237883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.986 [2024-10-01 13:52:37.237933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.986 [2024-10-01 13:52:37.237955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.986 [2024-10-01 13:52:37.237970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.987 [2024-10-01 13:52:37.238005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.987 [2024-10-01 13:52:37.247784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.987 [2024-10-01 13:52:37.247900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.987 [2024-10-01 13:52:37.247950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.987 [2024-10-01 13:52:37.247969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.987 [2024-10-01 13:52:37.248007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.987 [2024-10-01 13:52:37.248041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.987 [2024-10-01 13:52:37.248058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.987 [2024-10-01 13:52:37.248073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.987 [2024-10-01 13:52:37.248107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.987 [2024-10-01 13:52:37.257880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.987 [2024-10-01 13:52:37.258009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.987 [2024-10-01 13:52:37.258042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.987 [2024-10-01 13:52:37.258060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.987 [2024-10-01 13:52:37.258096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.987 [2024-10-01 13:52:37.258134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.987 [2024-10-01 13:52:37.258152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.987 [2024-10-01 13:52:37.258166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.987 [2024-10-01 13:52:37.258201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.987 [2024-10-01 13:52:37.269429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.987 [2024-10-01 13:52:37.269546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.987 [2024-10-01 13:52:37.269578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.987 [2024-10-01 13:52:37.269596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.987 [2024-10-01 13:52:37.269632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.987 [2024-10-01 13:52:37.269668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.987 [2024-10-01 13:52:37.269685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.987 [2024-10-01 13:52:37.269700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.987 [2024-10-01 13:52:37.269735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.987 [2024-10-01 13:52:37.279531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.987 [2024-10-01 13:52:37.279648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.987 [2024-10-01 13:52:37.279680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.987 [2024-10-01 13:52:37.279699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.987 [2024-10-01 13:52:37.279735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.987 [2024-10-01 13:52:37.279782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.987 [2024-10-01 13:52:37.279799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.987 [2024-10-01 13:52:37.279813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.987 [2024-10-01 13:52:37.279848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.987 [2024-10-01 13:52:37.289629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.987 [2024-10-01 13:52:37.289747] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.987 [2024-10-01 13:52:37.289779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.987 [2024-10-01 13:52:37.289797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.987 [2024-10-01 13:52:37.289855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.987 [2024-10-01 13:52:37.291097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.987 [2024-10-01 13:52:37.291135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.987 [2024-10-01 13:52:37.291153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.987 [2024-10-01 13:52:37.291946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.987 [2024-10-01 13:52:37.299807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.987 [2024-10-01 13:52:37.299937] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.987 [2024-10-01 13:52:37.299970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.987 [2024-10-01 13:52:37.299988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.987 [2024-10-01 13:52:37.300025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.987 [2024-10-01 13:52:37.300061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.987 [2024-10-01 13:52:37.300078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.987 [2024-10-01 13:52:37.300092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.987 [2024-10-01 13:52:37.300127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.987 [2024-10-01 13:52:37.309908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.987 [2024-10-01 13:52:37.310035] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.987 [2024-10-01 13:52:37.310067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.987 [2024-10-01 13:52:37.310085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.987 [2024-10-01 13:52:37.310121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.987 [2024-10-01 13:52:37.310377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.987 [2024-10-01 13:52:37.310412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.987 [2024-10-01 13:52:37.310429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.987 [2024-10-01 13:52:37.310504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.987 [2024-10-01 13:52:37.322081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.987 [2024-10-01 13:52:37.322933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.987 [2024-10-01 13:52:37.322977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.987 [2024-10-01 13:52:37.322998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.987 [2024-10-01 13:52:37.323104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.987 [2024-10-01 13:52:37.323147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.987 [2024-10-01 13:52:37.323165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.987 [2024-10-01 13:52:37.323198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.987 [2024-10-01 13:52:37.323247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.987 [2024-10-01 13:52:37.332355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.987 [2024-10-01 13:52:37.332472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.987 [2024-10-01 13:52:37.332505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.987 [2024-10-01 13:52:37.332524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.987 [2024-10-01 13:52:37.332560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.987 [2024-10-01 13:52:37.332595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.987 [2024-10-01 13:52:37.332612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.987 [2024-10-01 13:52:37.332627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.987 [2024-10-01 13:52:37.332662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.987 [2024-10-01 13:52:37.342553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.987 [2024-10-01 13:52:37.342668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.987 [2024-10-01 13:52:37.342699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.987 [2024-10-01 13:52:37.342717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.987 [2024-10-01 13:52:37.342753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.987 [2024-10-01 13:52:37.342789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.987 [2024-10-01 13:52:37.342806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.987 [2024-10-01 13:52:37.342821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.987 [2024-10-01 13:52:37.342855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.987 [2024-10-01 13:52:37.353091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.987 [2024-10-01 13:52:37.353207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.987 [2024-10-01 13:52:37.353239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.987 [2024-10-01 13:52:37.353257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.987 [2024-10-01 13:52:37.353294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.987 [2024-10-01 13:52:37.353340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.987 [2024-10-01 13:52:37.353360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.987 [2024-10-01 13:52:37.353374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.987 [2024-10-01 13:52:37.353408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.987 [2024-10-01 13:52:37.363191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.988 [2024-10-01 13:52:37.363306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.988 [2024-10-01 13:52:37.363354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.988 [2024-10-01 13:52:37.363374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.988 [2024-10-01 13:52:37.363412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.988 [2024-10-01 13:52:37.363686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.988 [2024-10-01 13:52:37.363722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.988 [2024-10-01 13:52:37.363741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.988 [2024-10-01 13:52:37.363817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.988 [2024-10-01 13:52:37.375251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.988 [2024-10-01 13:52:37.376098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.988 [2024-10-01 13:52:37.376141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.988 [2024-10-01 13:52:37.376163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.988 [2024-10-01 13:52:37.376265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.988 [2024-10-01 13:52:37.376308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.988 [2024-10-01 13:52:37.376326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.988 [2024-10-01 13:52:37.376341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.988 [2024-10-01 13:52:37.376379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.988 [2024-10-01 13:52:37.385594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.988 [2024-10-01 13:52:37.385734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.988 [2024-10-01 13:52:37.385766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.988 [2024-10-01 13:52:37.385784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.988 [2024-10-01 13:52:37.385821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.988 [2024-10-01 13:52:37.385856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.988 [2024-10-01 13:52:37.385874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.988 [2024-10-01 13:52:37.385888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.988 [2024-10-01 13:52:37.385938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.988 [2024-10-01 13:52:37.395946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.988 [2024-10-01 13:52:37.396072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.988 [2024-10-01 13:52:37.396105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.988 [2024-10-01 13:52:37.396123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.988 [2024-10-01 13:52:37.396160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.988 [2024-10-01 13:52:37.396223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.988 [2024-10-01 13:52:37.396243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.988 [2024-10-01 13:52:37.396257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.988 [2024-10-01 13:52:37.396292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.988 [2024-10-01 13:52:37.406702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.988 [2024-10-01 13:52:37.406819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.988 [2024-10-01 13:52:37.406851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.988 [2024-10-01 13:52:37.406869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.988 [2024-10-01 13:52:37.406907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.988 [2024-10-01 13:52:37.406964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.988 [2024-10-01 13:52:37.406983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.988 [2024-10-01 13:52:37.406998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.988 [2024-10-01 13:52:37.407032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.988 [2024-10-01 13:52:37.416802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.988 [2024-10-01 13:52:37.416934] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.988 [2024-10-01 13:52:37.416966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.988 [2024-10-01 13:52:37.416985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.988 [2024-10-01 13:52:37.417023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.988 [2024-10-01 13:52:37.417058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.988 [2024-10-01 13:52:37.417075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.988 [2024-10-01 13:52:37.417089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.988 [2024-10-01 13:52:37.417125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.988 [2024-10-01 13:52:37.429147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.988 [2024-10-01 13:52:37.429991] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.988 [2024-10-01 13:52:37.430035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.988 [2024-10-01 13:52:37.430056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.988 [2024-10-01 13:52:37.430158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.988 [2024-10-01 13:52:37.430203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.988 [2024-10-01 13:52:37.430221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.988 [2024-10-01 13:52:37.430236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.988 [2024-10-01 13:52:37.430292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.988 [2024-10-01 13:52:37.439570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.988 [2024-10-01 13:52:37.439731] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.988 [2024-10-01 13:52:37.439766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.988 [2024-10-01 13:52:37.439789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.988 [2024-10-01 13:52:37.439829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.988 [2024-10-01 13:52:37.439865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.988 [2024-10-01 13:52:37.439883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.988 [2024-10-01 13:52:37.439899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.988 [2024-10-01 13:52:37.439952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.988 [2024-10-01 13:52:37.449933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.988 [2024-10-01 13:52:37.450070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.988 [2024-10-01 13:52:37.450103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.988 [2024-10-01 13:52:37.450121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.988 [2024-10-01 13:52:37.450160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.988 [2024-10-01 13:52:37.450196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.988 [2024-10-01 13:52:37.450215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.988 [2024-10-01 13:52:37.450230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.988 [2024-10-01 13:52:37.450266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.988 [2024-10-01 13:52:37.461008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.988 [2024-10-01 13:52:37.461124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.988 [2024-10-01 13:52:37.461156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.988 [2024-10-01 13:52:37.461174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.988 [2024-10-01 13:52:37.461212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.988 [2024-10-01 13:52:37.461247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.988 [2024-10-01 13:52:37.461265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.988 [2024-10-01 13:52:37.461279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.988 [2024-10-01 13:52:37.461316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.988 [2024-10-01 13:52:37.472385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.988 [2024-10-01 13:52:37.472507] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.988 [2024-10-01 13:52:37.472539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.988 [2024-10-01 13:52:37.472589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.988 [2024-10-01 13:52:37.472629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.988 [2024-10-01 13:52:37.472665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.988 [2024-10-01 13:52:37.472683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.988 [2024-10-01 13:52:37.472697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.988 [2024-10-01 13:52:37.472732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.988 [2024-10-01 13:52:37.482850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.988 [2024-10-01 13:52:37.483016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.988 [2024-10-01 13:52:37.483048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.989 [2024-10-01 13:52:37.483067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.989 [2024-10-01 13:52:37.483106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.989 [2024-10-01 13:52:37.483141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.989 [2024-10-01 13:52:37.483159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.989 [2024-10-01 13:52:37.483174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.989 [2024-10-01 13:52:37.483209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.989 [2024-10-01 13:52:37.493964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.989 [2024-10-01 13:52:37.494165] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.989 [2024-10-01 13:52:37.494207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.989 [2024-10-01 13:52:37.494228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.989 [2024-10-01 13:52:37.494267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.989 [2024-10-01 13:52:37.494304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.989 [2024-10-01 13:52:37.494322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.989 [2024-10-01 13:52:37.494348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.989 [2024-10-01 13:52:37.494384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.989 [2024-10-01 13:52:37.504252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.989 [2024-10-01 13:52:37.504391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.989 [2024-10-01 13:52:37.504424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.989 [2024-10-01 13:52:37.504443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.989 [2024-10-01 13:52:37.504482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.989 [2024-10-01 13:52:37.504518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.989 [2024-10-01 13:52:37.504567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.989 [2024-10-01 13:52:37.504583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.989 [2024-10-01 13:52:37.504619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.989 [2024-10-01 13:52:37.514372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.989 [2024-10-01 13:52:37.514514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.989 [2024-10-01 13:52:37.514561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.989 [2024-10-01 13:52:37.514583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.989 [2024-10-01 13:52:37.514622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.989 [2024-10-01 13:52:37.514677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.989 [2024-10-01 13:52:37.514700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.989 [2024-10-01 13:52:37.514715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.989 [2024-10-01 13:52:37.515978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.989 [2024-10-01 13:52:37.524493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.989 [2024-10-01 13:52:37.524643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.989 [2024-10-01 13:52:37.524679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.989 [2024-10-01 13:52:37.524699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.989 [2024-10-01 13:52:37.524738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.989 [2024-10-01 13:52:37.524775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.989 [2024-10-01 13:52:37.524792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.989 [2024-10-01 13:52:37.524808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.989 [2024-10-01 13:52:37.524844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.989 [2024-10-01 13:52:37.534606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.989 [2024-10-01 13:52:37.534768] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.989 [2024-10-01 13:52:37.534802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.989 [2024-10-01 13:52:37.534822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.989 [2024-10-01 13:52:37.534860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.989 [2024-10-01 13:52:37.534896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.989 [2024-10-01 13:52:37.534930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.989 [2024-10-01 13:52:37.534949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.989 [2024-10-01 13:52:37.534987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.989 [2024-10-01 13:52:37.544734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.989 [2024-10-01 13:52:37.544886] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.989 [2024-10-01 13:52:37.544935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.989 [2024-10-01 13:52:37.544957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.989 [2024-10-01 13:52:37.544999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.989 [2024-10-01 13:52:37.545035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.989 [2024-10-01 13:52:37.545053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.989 [2024-10-01 13:52:37.545068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.989 [2024-10-01 13:52:37.545103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.989 [2024-10-01 13:52:37.554869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.989 [2024-10-01 13:52:37.555027] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.989 [2024-10-01 13:52:37.555061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.989 [2024-10-01 13:52:37.555080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.989 [2024-10-01 13:52:37.555678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.989 [2024-10-01 13:52:37.555882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.989 [2024-10-01 13:52:37.555926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.989 [2024-10-01 13:52:37.555948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.989 [2024-10-01 13:52:37.556067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.989 [2024-10-01 13:52:37.565049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.989 [2024-10-01 13:52:37.565212] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.989 [2024-10-01 13:52:37.565274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.989 [2024-10-01 13:52:37.565342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.989 [2024-10-01 13:52:37.565618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.989 [2024-10-01 13:52:37.565719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.989 [2024-10-01 13:52:37.565744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.989 [2024-10-01 13:52:37.565762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.989 [2024-10-01 13:52:37.565798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.989 [2024-10-01 13:52:37.575182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.989 [2024-10-01 13:52:37.576577] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.989 [2024-10-01 13:52:37.576625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.989 [2024-10-01 13:52:37.576646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.989 [2024-10-01 13:52:37.576925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.989 [2024-10-01 13:52:37.577875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.989 [2024-10-01 13:52:37.577924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.989 [2024-10-01 13:52:37.577947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.989 [2024-10-01 13:52:37.578697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.989 [2024-10-01 13:52:37.586008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.989 [2024-10-01 13:52:37.586225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.989 [2024-10-01 13:52:37.586268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.989 [2024-10-01 13:52:37.586290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.990 [2024-10-01 13:52:37.586330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.990 [2024-10-01 13:52:37.586366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.990 [2024-10-01 13:52:37.586383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.990 [2024-10-01 13:52:37.586398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.990 [2024-10-01 13:52:37.586433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.990 [2024-10-01 13:52:37.596705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.990 [2024-10-01 13:52:37.596856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.990 [2024-10-01 13:52:37.596898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.990 [2024-10-01 13:52:37.596934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.990 [2024-10-01 13:52:37.596976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.990 [2024-10-01 13:52:37.597033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.990 [2024-10-01 13:52:37.597056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.990 [2024-10-01 13:52:37.597073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.990 [2024-10-01 13:52:37.597108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.990 [2024-10-01 13:52:37.606832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.990 [2024-10-01 13:52:37.606984] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.990 [2024-10-01 13:52:37.607018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.990 [2024-10-01 13:52:37.607037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.990 [2024-10-01 13:52:37.607075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.990 [2024-10-01 13:52:37.607111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.990 [2024-10-01 13:52:37.607129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.990 [2024-10-01 13:52:37.607178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.990 [2024-10-01 13:52:37.607216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.990 [2024-10-01 13:52:37.617084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.990 [2024-10-01 13:52:37.617224] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.990 [2024-10-01 13:52:37.617267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.990 [2024-10-01 13:52:37.617286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.990 [2024-10-01 13:52:37.617325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.990 [2024-10-01 13:52:37.617365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.990 [2024-10-01 13:52:37.617383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.990 [2024-10-01 13:52:37.617398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.990 [2024-10-01 13:52:37.617433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.990 [2024-10-01 13:52:37.628208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.990 [2024-10-01 13:52:37.628344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.990 [2024-10-01 13:52:37.628376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.990 [2024-10-01 13:52:37.628394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.990 [2024-10-01 13:52:37.628434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.990 [2024-10-01 13:52:37.628470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.990 [2024-10-01 13:52:37.628487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.990 [2024-10-01 13:52:37.628502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.990 [2024-10-01 13:52:37.628537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.990 [2024-10-01 13:52:37.638495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.990 [2024-10-01 13:52:37.638637] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.990 [2024-10-01 13:52:37.638671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.990 [2024-10-01 13:52:37.638689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.990 [2024-10-01 13:52:37.638727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.990 [2024-10-01 13:52:37.638764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.990 [2024-10-01 13:52:37.638781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.990 [2024-10-01 13:52:37.638797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.990 [2024-10-01 13:52:37.638832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.990 [2024-10-01 13:52:37.648620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.990 [2024-10-01 13:52:37.648803] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.990 [2024-10-01 13:52:37.648837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.990 [2024-10-01 13:52:37.648862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.990 [2024-10-01 13:52:37.648900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.990 [2024-10-01 13:52:37.648954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.990 [2024-10-01 13:52:37.648973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.990 [2024-10-01 13:52:37.648989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.990 [2024-10-01 13:52:37.650244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.990 [2024-10-01 13:52:37.658777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.990 [2024-10-01 13:52:37.658949] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.990 [2024-10-01 13:52:37.658996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.990 [2024-10-01 13:52:37.659017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.990 [2024-10-01 13:52:37.659057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.990 [2024-10-01 13:52:37.659094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.990 [2024-10-01 13:52:37.659112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.990 [2024-10-01 13:52:37.659128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.990 [2024-10-01 13:52:37.659856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.990 [2024-10-01 13:52:37.670528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.990 [2024-10-01 13:52:37.670670] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.990 [2024-10-01 13:52:37.670703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.990 [2024-10-01 13:52:37.670722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.990 [2024-10-01 13:52:37.670776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.990 [2024-10-01 13:52:37.670817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.990 [2024-10-01 13:52:37.670835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.990 [2024-10-01 13:52:37.670850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.990 [2024-10-01 13:52:37.670885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.990 [2024-10-01 13:52:37.680644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.990 [2024-10-01 13:52:37.680762] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.990 [2024-10-01 13:52:37.680794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.990 [2024-10-01 13:52:37.680812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.990 [2024-10-01 13:52:37.680848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.990 [2024-10-01 13:52:37.680927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.990 [2024-10-01 13:52:37.680950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.990 [2024-10-01 13:52:37.680965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.990 [2024-10-01 13:52:37.681002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.990 [2024-10-01 13:52:37.690826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.990 [2024-10-01 13:52:37.690985] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.990 [2024-10-01 13:52:37.691019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.990 [2024-10-01 13:52:37.691038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.990 [2024-10-01 13:52:37.691084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.990 [2024-10-01 13:52:37.691120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.990 [2024-10-01 13:52:37.691138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.990 [2024-10-01 13:52:37.691153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.990 [2024-10-01 13:52:37.691188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.990 [2024-10-01 13:52:37.701843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.990 [2024-10-01 13:52:37.702004] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.990 [2024-10-01 13:52:37.702053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.990 [2024-10-01 13:52:37.702072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.990 [2024-10-01 13:52:37.702110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.990 [2024-10-01 13:52:37.702146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.990 [2024-10-01 13:52:37.702164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.990 [2024-10-01 13:52:37.702180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.990 [2024-10-01 13:52:37.702215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.990 [2024-10-01 13:52:37.712104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.990 [2024-10-01 13:52:37.712227] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.990 [2024-10-01 13:52:37.712260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.990 [2024-10-01 13:52:37.712279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.990 [2024-10-01 13:52:37.712316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.991 [2024-10-01 13:52:37.712353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.991 [2024-10-01 13:52:37.712371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.991 [2024-10-01 13:52:37.712386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.991 [2024-10-01 13:52:37.712454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.991 [2024-10-01 13:52:37.722204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.991 [2024-10-01 13:52:37.722322] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.991 [2024-10-01 13:52:37.722354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.991 [2024-10-01 13:52:37.722372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.991 [2024-10-01 13:52:37.723607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.991 [2024-10-01 13:52:37.723844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.991 [2024-10-01 13:52:37.723880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.991 [2024-10-01 13:52:37.723897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.991 [2024-10-01 13:52:37.724799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.991 [2024-10-01 13:52:37.732304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.991 [2024-10-01 13:52:37.733112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.991 [2024-10-01 13:52:37.733156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.991 [2024-10-01 13:52:37.733177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.991 [2024-10-01 13:52:37.733286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.991 [2024-10-01 13:52:37.733330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.991 [2024-10-01 13:52:37.733348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.991 [2024-10-01 13:52:37.733363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.991 [2024-10-01 13:52:37.733399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.991 [2024-10-01 13:52:37.743816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.991 [2024-10-01 13:52:37.744023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.991 [2024-10-01 13:52:37.744066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.991 [2024-10-01 13:52:37.744088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.991 [2024-10-01 13:52:37.744137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.991 [2024-10-01 13:52:37.744176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.991 [2024-10-01 13:52:37.744194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.991 [2024-10-01 13:52:37.744210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.991 [2024-10-01 13:52:37.744247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.991 [2024-10-01 13:52:37.753979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.991 [2024-10-01 13:52:37.754123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.991 [2024-10-01 13:52:37.754157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.991 [2024-10-01 13:52:37.754220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.991 [2024-10-01 13:52:37.754264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.991 [2024-10-01 13:52:37.754301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.991 [2024-10-01 13:52:37.754318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.991 [2024-10-01 13:52:37.754333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.991 [2024-10-01 13:52:37.754368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.991 [2024-10-01 13:52:37.764331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.991 [2024-10-01 13:52:37.764583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.991 [2024-10-01 13:52:37.764627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.991 [2024-10-01 13:52:37.764648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.991 [2024-10-01 13:52:37.764765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.991 [2024-10-01 13:52:37.764822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.991 [2024-10-01 13:52:37.764841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.991 [2024-10-01 13:52:37.764857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.991 [2024-10-01 13:52:37.764893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.991 [2024-10-01 13:52:37.775629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.991 [2024-10-01 13:52:37.775749] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.991 [2024-10-01 13:52:37.775781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.991 [2024-10-01 13:52:37.775799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.991 [2024-10-01 13:52:37.775847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.991 [2024-10-01 13:52:37.775883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.991 [2024-10-01 13:52:37.775901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.991 [2024-10-01 13:52:37.775932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.991 [2024-10-01 13:52:37.775970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.991 [2024-10-01 13:52:37.786092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.991 [2024-10-01 13:52:37.786241] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.991 [2024-10-01 13:52:37.786282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.991 [2024-10-01 13:52:37.786302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.991 [2024-10-01 13:52:37.786339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.991 [2024-10-01 13:52:37.786375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.991 [2024-10-01 13:52:37.786427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.991 [2024-10-01 13:52:37.786444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.991 [2024-10-01 13:52:37.786481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.991 [2024-10-01 13:52:37.796215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.991 [2024-10-01 13:52:37.796337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.991 [2024-10-01 13:52:37.796369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.991 [2024-10-01 13:52:37.796387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.991 [2024-10-01 13:52:37.796425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.991 [2024-10-01 13:52:37.796461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.991 [2024-10-01 13:52:37.796479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.991 [2024-10-01 13:52:37.796493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.991 [2024-10-01 13:52:37.796528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.991 [2024-10-01 13:52:37.806318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.991 [2024-10-01 13:52:37.806434] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.991 [2024-10-01 13:52:37.806466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.991 [2024-10-01 13:52:37.806484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.991 [2024-10-01 13:52:37.806523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.991 [2024-10-01 13:52:37.806577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.991 [2024-10-01 13:52:37.806596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.991 [2024-10-01 13:52:37.806611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.991 [2024-10-01 13:52:37.806646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.991 [2024-10-01 13:52:37.817789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.991 [2024-10-01 13:52:37.818640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.991 [2024-10-01 13:52:37.818683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.991 [2024-10-01 13:52:37.818704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.991 [2024-10-01 13:52:37.818817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.991 [2024-10-01 13:52:37.818862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.991 [2024-10-01 13:52:37.818881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.991 [2024-10-01 13:52:37.818897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.991 [2024-10-01 13:52:37.818955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.991 [2024-10-01 13:52:37.827894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.991 [2024-10-01 13:52:37.828565] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.991 [2024-10-01 13:52:37.828608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.991 [2024-10-01 13:52:37.828628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.991 [2024-10-01 13:52:37.828793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.991 [2024-10-01 13:52:37.828953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.991 [2024-10-01 13:52:37.828985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.991 [2024-10-01 13:52:37.829003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.991 [2024-10-01 13:52:37.829048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.991 [2024-10-01 13:52:37.838005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.991 [2024-10-01 13:52:37.838163] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.991 [2024-10-01 13:52:37.838196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.991 [2024-10-01 13:52:37.838214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.991 [2024-10-01 13:52:37.838252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.991 [2024-10-01 13:52:37.838288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.992 [2024-10-01 13:52:37.838305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.992 [2024-10-01 13:52:37.838319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.992 [2024-10-01 13:52:37.838354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.992 [2024-10-01 13:52:37.848140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.992 [2024-10-01 13:52:37.848256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.992 [2024-10-01 13:52:37.848288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.992 [2024-10-01 13:52:37.848306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.992 [2024-10-01 13:52:37.848344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.992 [2024-10-01 13:52:37.848380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.992 [2024-10-01 13:52:37.848397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.992 [2024-10-01 13:52:37.848412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.992 [2024-10-01 13:52:37.848446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.992 [2024-10-01 13:52:37.858245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.992 [2024-10-01 13:52:37.858363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.992 [2024-10-01 13:52:37.858396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.992 [2024-10-01 13:52:37.858414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.992 [2024-10-01 13:52:37.858476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.992 [2024-10-01 13:52:37.858513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.992 [2024-10-01 13:52:37.858531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.992 [2024-10-01 13:52:37.858560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.992 [2024-10-01 13:52:37.858596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.992 7992.89 IOPS, 31.22 MiB/s [2024-10-01 13:52:37.870975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.992 [2024-10-01 13:52:37.871175] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.992 [2024-10-01 13:52:37.871208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.992 [2024-10-01 13:52:37.871226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.992 [2024-10-01 13:52:37.872453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.992 [2024-10-01 13:52:37.873549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.992 [2024-10-01 13:52:37.873588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.992 [2024-10-01 13:52:37.873610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.992 [2024-10-01 13:52:37.874348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.992 [2024-10-01 13:52:37.881589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.992 [2024-10-01 13:52:37.881716] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.992 [2024-10-01 13:52:37.881749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.992 [2024-10-01 13:52:37.881767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.992 [2024-10-01 13:52:37.881804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.992 [2024-10-01 13:52:37.881840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.992 [2024-10-01 13:52:37.881858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.992 [2024-10-01 13:52:37.881872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.992 [2024-10-01 13:52:37.881907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.992 [2024-10-01 13:52:37.892639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.992 [2024-10-01 13:52:37.892993] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.992 [2024-10-01 13:52:37.893028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.992 [2024-10-01 13:52:37.893046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.992 [2024-10-01 13:52:37.893121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.992 [2024-10-01 13:52:37.893163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.992 [2024-10-01 13:52:37.893189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.992 [2024-10-01 13:52:37.893228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.992 [2024-10-01 13:52:37.893266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.992 [2024-10-01 13:52:37.903812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.992 [2024-10-01 13:52:37.903957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.992 [2024-10-01 13:52:37.903990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.992 [2024-10-01 13:52:37.904008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.992 [2024-10-01 13:52:37.904046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.992 [2024-10-01 13:52:37.904082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.992 [2024-10-01 13:52:37.904099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.992 [2024-10-01 13:52:37.904113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.992 [2024-10-01 13:52:37.904148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.992 [2024-10-01 13:52:37.914192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.992 [2024-10-01 13:52:37.914316] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.992 [2024-10-01 13:52:37.914348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.992 [2024-10-01 13:52:37.914366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.992 [2024-10-01 13:52:37.914977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.992 [2024-10-01 13:52:37.915188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.992 [2024-10-01 13:52:37.915223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.992 [2024-10-01 13:52:37.915241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.992 [2024-10-01 13:52:37.915353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.992 [2024-10-01 13:52:37.924707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.992 [2024-10-01 13:52:37.924823] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.992 [2024-10-01 13:52:37.924855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.992 [2024-10-01 13:52:37.924874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.992 [2024-10-01 13:52:37.924926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.992 [2024-10-01 13:52:37.924971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.992 [2024-10-01 13:52:37.924988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.992 [2024-10-01 13:52:37.925003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.992 [2024-10-01 13:52:37.925048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.992 [2024-10-01 13:52:37.934885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.992 [2024-10-01 13:52:37.935039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.992 [2024-10-01 13:52:37.935084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.992 [2024-10-01 13:52:37.935103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.992 [2024-10-01 13:52:37.935150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.992 [2024-10-01 13:52:37.935189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.992 [2024-10-01 13:52:37.935206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.992 [2024-10-01 13:52:37.935220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.992 [2024-10-01 13:52:37.935255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.992 [2024-10-01 13:52:37.945035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.992 [2024-10-01 13:52:37.945159] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.992 [2024-10-01 13:52:37.945191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.992 [2024-10-01 13:52:37.945209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.992 [2024-10-01 13:52:37.945246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.992 [2024-10-01 13:52:37.945282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.992 [2024-10-01 13:52:37.945300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.992 [2024-10-01 13:52:37.945314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.992 [2024-10-01 13:52:37.945349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.992 [2024-10-01 13:52:37.955134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.992 [2024-10-01 13:52:37.955253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.992 [2024-10-01 13:52:37.955285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.992 [2024-10-01 13:52:37.955304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.992 [2024-10-01 13:52:37.955342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.992 [2024-10-01 13:52:37.955378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.992 [2024-10-01 13:52:37.955396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.992 [2024-10-01 13:52:37.955412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.992 [2024-10-01 13:52:37.955447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.992 [2024-10-01 13:52:37.965236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.992 [2024-10-01 13:52:37.966563] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.992 [2024-10-01 13:52:37.966607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.992 [2024-10-01 13:52:37.966629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.992 [2024-10-01 13:52:37.967412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.992 [2024-10-01 13:52:37.967754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.993 [2024-10-01 13:52:37.967791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.993 [2024-10-01 13:52:37.967809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.993 [2024-10-01 13:52:37.967884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.993 [2024-10-01 13:52:37.976504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.993 [2024-10-01 13:52:37.976818] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.993 [2024-10-01 13:52:37.976861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.993 [2024-10-01 13:52:37.976881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.993 [2024-10-01 13:52:37.977789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.993 [2024-10-01 13:52:37.978551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.993 [2024-10-01 13:52:37.978589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.993 [2024-10-01 13:52:37.978612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.993 [2024-10-01 13:52:37.978714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.993 [2024-10-01 13:52:37.986612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.993 [2024-10-01 13:52:37.986733] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.993 [2024-10-01 13:52:37.986764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.993 [2024-10-01 13:52:37.986783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.993 [2024-10-01 13:52:37.986830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.993 [2024-10-01 13:52:37.986865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.993 [2024-10-01 13:52:37.986884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.993 [2024-10-01 13:52:37.986899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.993 [2024-10-01 13:52:37.986949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.993 [2024-10-01 13:52:37.997135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.993 [2024-10-01 13:52:37.997510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.993 [2024-10-01 13:52:37.997555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.993 [2024-10-01 13:52:37.997576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.993 [2024-10-01 13:52:37.997653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.993 [2024-10-01 13:52:37.997707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.993 [2024-10-01 13:52:37.997727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.993 [2024-10-01 13:52:37.997743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.993 [2024-10-01 13:52:37.997811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.993 [2024-10-01 13:52:38.008390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.993 [2024-10-01 13:52:38.008526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.993 [2024-10-01 13:52:38.008559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.993 [2024-10-01 13:52:38.008578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.993 [2024-10-01 13:52:38.008617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.993 [2024-10-01 13:52:38.008652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.993 [2024-10-01 13:52:38.008670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.993 [2024-10-01 13:52:38.008685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.993 [2024-10-01 13:52:38.008720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.993 [2024-10-01 13:52:38.018886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.993 [2024-10-01 13:52:38.019016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.993 [2024-10-01 13:52:38.019049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.993 [2024-10-01 13:52:38.019068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.993 [2024-10-01 13:52:38.019651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.993 [2024-10-01 13:52:38.019839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.993 [2024-10-01 13:52:38.019874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.993 [2024-10-01 13:52:38.019892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.993 [2024-10-01 13:52:38.020030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.993 [2024-10-01 13:52:38.029643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.993 [2024-10-01 13:52:38.029762] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.993 [2024-10-01 13:52:38.029795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.993 [2024-10-01 13:52:38.029814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.993 [2024-10-01 13:52:38.029851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.993 [2024-10-01 13:52:38.029888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.993 [2024-10-01 13:52:38.029905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.993 [2024-10-01 13:52:38.029937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.993 [2024-10-01 13:52:38.029975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.993 [2024-10-01 13:52:38.039742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.993 [2024-10-01 13:52:38.040098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.993 [2024-10-01 13:52:38.040169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.993 [2024-10-01 13:52:38.040192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.993 [2024-10-01 13:52:38.040339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.993 [2024-10-01 13:52:38.040471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.993 [2024-10-01 13:52:38.040499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.993 [2024-10-01 13:52:38.040516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.993 [2024-10-01 13:52:38.040561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.993 [2024-10-01 13:52:38.050607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.993 [2024-10-01 13:52:38.050731] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.993 [2024-10-01 13:52:38.050764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.993 [2024-10-01 13:52:38.050782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.993 [2024-10-01 13:52:38.050820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.993 [2024-10-01 13:52:38.050855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.993 [2024-10-01 13:52:38.050873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.993 [2024-10-01 13:52:38.050888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.993 [2024-10-01 13:52:38.050939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.993 [2024-10-01 13:52:38.060711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.993 [2024-10-01 13:52:38.060826] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.993 [2024-10-01 13:52:38.060857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.993 [2024-10-01 13:52:38.060876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.993 [2024-10-01 13:52:38.060929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.993 [2024-10-01 13:52:38.060970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.993 [2024-10-01 13:52:38.060988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.993 [2024-10-01 13:52:38.061003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.993 [2024-10-01 13:52:38.061038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.993 [2024-10-01 13:52:38.070894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.993 [2024-10-01 13:52:38.071042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.993 [2024-10-01 13:52:38.071074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.993 [2024-10-01 13:52:38.071092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.993 [2024-10-01 13:52:38.071129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.993 [2024-10-01 13:52:38.071196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.993 [2024-10-01 13:52:38.071217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.993 [2024-10-01 13:52:38.071232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.993 [2024-10-01 13:52:38.071268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.993 [2024-10-01 13:52:38.081194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.993 [2024-10-01 13:52:38.081319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.994 [2024-10-01 13:52:38.081352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.994 [2024-10-01 13:52:38.081370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.994 [2024-10-01 13:52:38.081413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.994 [2024-10-01 13:52:38.081448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.994 [2024-10-01 13:52:38.081466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.994 [2024-10-01 13:52:38.081480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.994 [2024-10-01 13:52:38.081514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.994 [2024-10-01 13:52:38.091301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.994 [2024-10-01 13:52:38.091418] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.994 [2024-10-01 13:52:38.091450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.994 [2024-10-01 13:52:38.091469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.994 [2024-10-01 13:52:38.091507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.994 [2024-10-01 13:52:38.091544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.994 [2024-10-01 13:52:38.091562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.994 [2024-10-01 13:52:38.091577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.994 [2024-10-01 13:52:38.091611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.994 [2024-10-01 13:52:38.101408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.994 [2024-10-01 13:52:38.101525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.994 [2024-10-01 13:52:38.101557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.994 [2024-10-01 13:52:38.101576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.994 [2024-10-01 13:52:38.101612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.994 [2024-10-01 13:52:38.101647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.994 [2024-10-01 13:52:38.101665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.994 [2024-10-01 13:52:38.101681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.994 [2024-10-01 13:52:38.101716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.994 [2024-10-01 13:52:38.111514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.994 [2024-10-01 13:52:38.111632] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.994 [2024-10-01 13:52:38.111665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.994 [2024-10-01 13:52:38.111683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.994 [2024-10-01 13:52:38.111720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.994 [2024-10-01 13:52:38.111756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.994 [2024-10-01 13:52:38.111773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.994 [2024-10-01 13:52:38.111788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.994 [2024-10-01 13:52:38.111823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.994 [2024-10-01 13:52:38.121611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.994 [2024-10-01 13:52:38.121727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.994 [2024-10-01 13:52:38.121760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.994 [2024-10-01 13:52:38.121778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.994 [2024-10-01 13:52:38.121815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.994 [2024-10-01 13:52:38.121851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.994 [2024-10-01 13:52:38.121880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.994 [2024-10-01 13:52:38.121895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.994 [2024-10-01 13:52:38.121948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.994 [2024-10-01 13:52:38.131720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.994 [2024-10-01 13:52:38.131855] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.994 [2024-10-01 13:52:38.131887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.994 [2024-10-01 13:52:38.131906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.994 [2024-10-01 13:52:38.131961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.994 [2024-10-01 13:52:38.131999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.994 [2024-10-01 13:52:38.132018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.994 [2024-10-01 13:52:38.132033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.994 [2024-10-01 13:52:38.132068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.994 [2024-10-01 13:52:38.141847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.994 [2024-10-01 13:52:38.142021] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.994 [2024-10-01 13:52:38.142055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.994 [2024-10-01 13:52:38.142105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.994 [2024-10-01 13:52:38.142147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.994 [2024-10-01 13:52:38.142183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.994 [2024-10-01 13:52:38.142202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.994 [2024-10-01 13:52:38.142218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.994 [2024-10-01 13:52:38.142253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.994 [2024-10-01 13:52:38.151988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.994 [2024-10-01 13:52:38.152117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.994 [2024-10-01 13:52:38.152150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.994 [2024-10-01 13:52:38.152168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.994 [2024-10-01 13:52:38.152206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.994 [2024-10-01 13:52:38.152243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.994 [2024-10-01 13:52:38.152260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.994 [2024-10-01 13:52:38.152275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.994 [2024-10-01 13:52:38.152310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.994 [2024-10-01 13:52:38.162094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.994 [2024-10-01 13:52:38.162225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.994 [2024-10-01 13:52:38.162258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.994 [2024-10-01 13:52:38.162277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.994 [2024-10-01 13:52:38.162316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.994 [2024-10-01 13:52:38.162352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.994 [2024-10-01 13:52:38.162370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.994 [2024-10-01 13:52:38.162386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.994 [2024-10-01 13:52:38.162421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.994 [2024-10-01 13:52:38.172294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.994 [2024-10-01 13:52:38.172444] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.994 [2024-10-01 13:52:38.172478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.994 [2024-10-01 13:52:38.172497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.994 [2024-10-01 13:52:38.172536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.994 [2024-10-01 13:52:38.172572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.994 [2024-10-01 13:52:38.172590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.994 [2024-10-01 13:52:38.172640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.994 [2024-10-01 13:52:38.172679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.994 [2024-10-01 13:52:38.182421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.994 [2024-10-01 13:52:38.182602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.994 [2024-10-01 13:52:38.182640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.994 [2024-10-01 13:52:38.182660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.994 [2024-10-01 13:52:38.182700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.994 [2024-10-01 13:52:38.182737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.994 [2024-10-01 13:52:38.182754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.994 [2024-10-01 13:52:38.182770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.994 [2024-10-01 13:52:38.182806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.994 [2024-10-01 13:52:38.192719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.994 [2024-10-01 13:52:38.192885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.994 [2024-10-01 13:52:38.192934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.994 [2024-10-01 13:52:38.192957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.994 [2024-10-01 13:52:38.192997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.994 [2024-10-01 13:52:38.193034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.994 [2024-10-01 13:52:38.193051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.994 [2024-10-01 13:52:38.193067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.994 [2024-10-01 13:52:38.193103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.994 [2024-10-01 13:52:38.203029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.994 [2024-10-01 13:52:38.203188] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.994 [2024-10-01 13:52:38.203222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.995 [2024-10-01 13:52:38.203242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.995 [2024-10-01 13:52:38.203281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.995 [2024-10-01 13:52:38.203318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.995 [2024-10-01 13:52:38.203336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.995 [2024-10-01 13:52:38.203353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.995 [2024-10-01 13:52:38.203397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.995 [2024-10-01 13:52:38.213154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.995 [2024-10-01 13:52:38.213338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.995 [2024-10-01 13:52:38.213372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.995 [2024-10-01 13:52:38.213391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.995 [2024-10-01 13:52:38.213430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.995 [2024-10-01 13:52:38.213468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.995 [2024-10-01 13:52:38.213486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.995 [2024-10-01 13:52:38.213501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.995 [2024-10-01 13:52:38.213536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.995 [2024-10-01 13:52:38.223308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.995 [2024-10-01 13:52:38.223443] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.995 [2024-10-01 13:52:38.223477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.995 [2024-10-01 13:52:38.223497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.995 [2024-10-01 13:52:38.223538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.995 [2024-10-01 13:52:38.223574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.995 [2024-10-01 13:52:38.223593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.995 [2024-10-01 13:52:38.223609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.995 [2024-10-01 13:52:38.223645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.995 [2024-10-01 13:52:38.233428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.995 [2024-10-01 13:52:38.233590] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.995 [2024-10-01 13:52:38.233624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.995 [2024-10-01 13:52:38.233643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.995 [2024-10-01 13:52:38.233683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.995 [2024-10-01 13:52:38.233743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.995 [2024-10-01 13:52:38.233765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.995 [2024-10-01 13:52:38.233781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.995 [2024-10-01 13:52:38.233817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.995 [2024-10-01 13:52:38.243560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.995 [2024-10-01 13:52:38.243720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.995 [2024-10-01 13:52:38.243753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.995 [2024-10-01 13:52:38.243772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.995 [2024-10-01 13:52:38.243838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.995 [2024-10-01 13:52:38.243875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.995 [2024-10-01 13:52:38.243894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.995 [2024-10-01 13:52:38.243909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.995 [2024-10-01 13:52:38.243963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.995 [2024-10-01 13:52:38.253682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.995 [2024-10-01 13:52:38.253853] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.995 [2024-10-01 13:52:38.253888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.995 [2024-10-01 13:52:38.253907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.995 [2024-10-01 13:52:38.253967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.995 [2024-10-01 13:52:38.254007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.995 [2024-10-01 13:52:38.254025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.995 [2024-10-01 13:52:38.254041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.995 [2024-10-01 13:52:38.254077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.995 [2024-10-01 13:52:38.263929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.995 [2024-10-01 13:52:38.264132] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.995 [2024-10-01 13:52:38.264168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.995 [2024-10-01 13:52:38.264191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.995 [2024-10-01 13:52:38.264230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.995 [2024-10-01 13:52:38.264267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.995 [2024-10-01 13:52:38.264285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.995 [2024-10-01 13:52:38.264301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.995 [2024-10-01 13:52:38.264337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.995 [2024-10-01 13:52:38.274057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.995 [2024-10-01 13:52:38.274202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.995 [2024-10-01 13:52:38.274236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.995 [2024-10-01 13:52:38.274255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.995 [2024-10-01 13:52:38.274293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.995 [2024-10-01 13:52:38.274330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.995 [2024-10-01 13:52:38.274348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.995 [2024-10-01 13:52:38.274389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.995 [2024-10-01 13:52:38.274428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.995 [2024-10-01 13:52:38.284229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.995 [2024-10-01 13:52:38.284352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.995 [2024-10-01 13:52:38.284384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.995 [2024-10-01 13:52:38.284403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.995 [2024-10-01 13:52:38.284440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.995 [2024-10-01 13:52:38.284485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.995 [2024-10-01 13:52:38.284503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.995 [2024-10-01 13:52:38.284518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.995 [2024-10-01 13:52:38.284553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.995 [2024-10-01 13:52:38.294579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.995 [2024-10-01 13:52:38.294696] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.995 [2024-10-01 13:52:38.294728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.995 [2024-10-01 13:52:38.294746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.995 [2024-10-01 13:52:38.294783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.995 [2024-10-01 13:52:38.294819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.995 [2024-10-01 13:52:38.294836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.995 [2024-10-01 13:52:38.294851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.995 [2024-10-01 13:52:38.294887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.995 [2024-10-01 13:52:38.304683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.995 [2024-10-01 13:52:38.304801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.995 [2024-10-01 13:52:38.304833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.995 [2024-10-01 13:52:38.304851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.995 [2024-10-01 13:52:38.304888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.995 [2024-10-01 13:52:38.304941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.995 [2024-10-01 13:52:38.304962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.995 [2024-10-01 13:52:38.304977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.995 [2024-10-01 13:52:38.305551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.995 [2024-10-01 13:52:38.316627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.995 [2024-10-01 13:52:38.316994] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.995 [2024-10-01 13:52:38.317060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.995 [2024-10-01 13:52:38.317083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.995 [2024-10-01 13:52:38.317158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.995 [2024-10-01 13:52:38.317201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.995 [2024-10-01 13:52:38.317221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.995 [2024-10-01 13:52:38.317236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.995 [2024-10-01 13:52:38.317272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.995 [2024-10-01 13:52:38.327993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.995 [2024-10-01 13:52:38.328114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.995 [2024-10-01 13:52:38.328146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.995 [2024-10-01 13:52:38.328165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.995 [2024-10-01 13:52:38.328202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.995 [2024-10-01 13:52:38.328237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.995 [2024-10-01 13:52:38.328255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.995 [2024-10-01 13:52:38.328269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.995 [2024-10-01 13:52:38.328304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.995 [2024-10-01 13:52:38.338626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.995 [2024-10-01 13:52:38.338753] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.996 [2024-10-01 13:52:38.338785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.996 [2024-10-01 13:52:38.338804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.996 [2024-10-01 13:52:38.339400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.996 [2024-10-01 13:52:38.339609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.996 [2024-10-01 13:52:38.339645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.996 [2024-10-01 13:52:38.339663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.996 [2024-10-01 13:52:38.339776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.996 [2024-10-01 13:52:38.348737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.996 [2024-10-01 13:52:38.349568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.996 [2024-10-01 13:52:38.349613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.996 [2024-10-01 13:52:38.349633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.996 [2024-10-01 13:52:38.349763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.996 [2024-10-01 13:52:38.349840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.996 [2024-10-01 13:52:38.349868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.996 [2024-10-01 13:52:38.349884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.996 [2024-10-01 13:52:38.349937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.996 [2024-10-01 13:52:38.358970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.996 [2024-10-01 13:52:38.359091] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.996 [2024-10-01 13:52:38.359123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.996 [2024-10-01 13:52:38.359142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.996 [2024-10-01 13:52:38.359179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.996 [2024-10-01 13:52:38.359215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.996 [2024-10-01 13:52:38.359232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.996 [2024-10-01 13:52:38.359247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.996 [2024-10-01 13:52:38.359281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.996 [2024-10-01 13:52:38.369760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.996 [2024-10-01 13:52:38.369877] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.996 [2024-10-01 13:52:38.369909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.996 [2024-10-01 13:52:38.369947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.996 [2024-10-01 13:52:38.370522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.996 [2024-10-01 13:52:38.370724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.996 [2024-10-01 13:52:38.370759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.996 [2024-10-01 13:52:38.370777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.996 [2024-10-01 13:52:38.370889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.996 [2024-10-01 13:52:38.380582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.996 [2024-10-01 13:52:38.380700] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.996 [2024-10-01 13:52:38.380732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.996 [2024-10-01 13:52:38.380751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.996 [2024-10-01 13:52:38.380788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.996 [2024-10-01 13:52:38.380823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.996 [2024-10-01 13:52:38.380841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.996 [2024-10-01 13:52:38.380855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.996 [2024-10-01 13:52:38.380890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.996 [2024-10-01 13:52:38.391016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.996 [2024-10-01 13:52:38.391153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.996 [2024-10-01 13:52:38.391185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.996 [2024-10-01 13:52:38.391204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.996 [2024-10-01 13:52:38.391242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.996 [2024-10-01 13:52:38.391278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.996 [2024-10-01 13:52:38.391297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.996 [2024-10-01 13:52:38.391312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.996 [2024-10-01 13:52:38.391347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.996 [2024-10-01 13:52:38.401439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.996 [2024-10-01 13:52:38.401619] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.996 [2024-10-01 13:52:38.401661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.996 [2024-10-01 13:52:38.401680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.996 [2024-10-01 13:52:38.401718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.996 [2024-10-01 13:52:38.401755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.996 [2024-10-01 13:52:38.401773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.996 [2024-10-01 13:52:38.401789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.996 [2024-10-01 13:52:38.401825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.996 [2024-10-01 13:52:38.409174] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9b1a00 was disconnected and freed. reset controller. 00:18:34.996 [2024-10-01 13:52:38.409317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.996 [2024-10-01 13:52:38.409398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.996 [2024-10-01 13:52:38.411669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.996 [2024-10-01 13:52:38.411785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.996 [2024-10-01 13:52:38.411818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.996 [2024-10-01 13:52:38.411837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.996 [2024-10-01 13:52:38.411874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.996 [2024-10-01 13:52:38.411909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.996 [2024-10-01 13:52:38.411946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.996 [2024-10-01 13:52:38.411961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.996 [2024-10-01 13:52:38.411998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.996 [2024-10-01 13:52:38.412149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.996 [2024-10-01 13:52:38.412185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.996 [2024-10-01 13:52:38.412203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.996 [2024-10-01 13:52:38.412217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.996 [2024-10-01 13:52:38.412246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.996 [2024-10-01 13:52:38.420829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.996 [2024-10-01 13:52:38.420965] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.996 [2024-10-01 13:52:38.420998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.996 [2024-10-01 13:52:38.421017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.996 [2024-10-01 13:52:38.421050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.996 [2024-10-01 13:52:38.421081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.996 [2024-10-01 13:52:38.421100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.996 [2024-10-01 13:52:38.421115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.996 [2024-10-01 13:52:38.421147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.996 [2024-10-01 13:52:38.422042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.996 [2024-10-01 13:52:38.422159] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.996 [2024-10-01 13:52:38.422198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.996 [2024-10-01 13:52:38.422218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.996 [2024-10-01 13:52:38.422251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.996 [2024-10-01 13:52:38.422282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.996 [2024-10-01 13:52:38.422299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.996 [2024-10-01 13:52:38.422313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.996 [2024-10-01 13:52:38.422344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.996 [2024-10-01 13:52:38.431207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.996 [2024-10-01 13:52:38.431322] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.996 [2024-10-01 13:52:38.431353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.996 [2024-10-01 13:52:38.431372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.996 [2024-10-01 13:52:38.431959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.996 [2024-10-01 13:52:38.432146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.996 [2024-10-01 13:52:38.432180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.996 [2024-10-01 13:52:38.432219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.996 [2024-10-01 13:52:38.432342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.997 [2024-10-01 13:52:38.432388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.997 [2024-10-01 13:52:38.432479] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.997 [2024-10-01 13:52:38.432518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.997 [2024-10-01 13:52:38.432537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.997 [2024-10-01 13:52:38.432571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.997 [2024-10-01 13:52:38.432611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.997 [2024-10-01 13:52:38.432630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.997 [2024-10-01 13:52:38.432645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.997 [2024-10-01 13:52:38.432675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.997 [2024-10-01 13:52:38.441712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.997 [2024-10-01 13:52:38.441826] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.997 [2024-10-01 13:52:38.441859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.997 [2024-10-01 13:52:38.441878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.997 [2024-10-01 13:52:38.441927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.997 [2024-10-01 13:52:38.441963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.997 [2024-10-01 13:52:38.441981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.997 [2024-10-01 13:52:38.441997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.997 [2024-10-01 13:52:38.442027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.997 [2024-10-01 13:52:38.442452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.997 [2024-10-01 13:52:38.442558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.997 [2024-10-01 13:52:38.442595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.997 [2024-10-01 13:52:38.442615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.997 [2024-10-01 13:52:38.443208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.997 [2024-10-01 13:52:38.443399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.997 [2024-10-01 13:52:38.443425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.997 [2024-10-01 13:52:38.443440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.997 [2024-10-01 13:52:38.443548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.997 [2024-10-01 13:52:38.452060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.997 [2024-10-01 13:52:38.452195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.997 [2024-10-01 13:52:38.452252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.997 [2024-10-01 13:52:38.452273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.997 [2024-10-01 13:52:38.452308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.997 [2024-10-01 13:52:38.452341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.997 [2024-10-01 13:52:38.452359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.997 [2024-10-01 13:52:38.452374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.997 [2024-10-01 13:52:38.452405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.997 [2024-10-01 13:52:38.452525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.997 [2024-10-01 13:52:38.452614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.997 [2024-10-01 13:52:38.452644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.997 [2024-10-01 13:52:38.452662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.997 [2024-10-01 13:52:38.453899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.997 [2024-10-01 13:52:38.454704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.997 [2024-10-01 13:52:38.454742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.997 [2024-10-01 13:52:38.454771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.997 [2024-10-01 13:52:38.455107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.997 [2024-10-01 13:52:38.462292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.997 [2024-10-01 13:52:38.462427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.997 [2024-10-01 13:52:38.462459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.997 [2024-10-01 13:52:38.462478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.997 [2024-10-01 13:52:38.462511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.997 [2024-10-01 13:52:38.462563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.997 [2024-10-01 13:52:38.462584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.997 [2024-10-01 13:52:38.462600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.997 [2024-10-01 13:52:38.462641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.997 [2024-10-01 13:52:38.462676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.997 [2024-10-01 13:52:38.462760] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.997 [2024-10-01 13:52:38.462789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.997 [2024-10-01 13:52:38.462807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.997 [2024-10-01 13:52:38.462838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.997 [2024-10-01 13:52:38.464098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.997 [2024-10-01 13:52:38.464137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.997 [2024-10-01 13:52:38.464156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.997 [2024-10-01 13:52:38.464399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.997 [2024-10-01 13:52:38.472391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.997 [2024-10-01 13:52:38.472553] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.997 [2024-10-01 13:52:38.472597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.997 [2024-10-01 13:52:38.472619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.997 [2024-10-01 13:52:38.472655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.997 [2024-10-01 13:52:38.472691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.997 [2024-10-01 13:52:38.472710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.997 [2024-10-01 13:52:38.472727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.997 [2024-10-01 13:52:38.472758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.997 [2024-10-01 13:52:38.472809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.997 [2024-10-01 13:52:38.473480] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.997 [2024-10-01 13:52:38.473522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.997 [2024-10-01 13:52:38.473544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.997 [2024-10-01 13:52:38.473730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.997 [2024-10-01 13:52:38.473862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.997 [2024-10-01 13:52:38.473891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.997 [2024-10-01 13:52:38.473908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.997 [2024-10-01 13:52:38.473971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.997 [2024-10-01 13:52:38.482518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.997 [2024-10-01 13:52:38.482706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.997 [2024-10-01 13:52:38.482756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.997 [2024-10-01 13:52:38.482778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.997 [2024-10-01 13:52:38.482813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.997 [2024-10-01 13:52:38.484077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.997 [2024-10-01 13:52:38.484120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.997 [2024-10-01 13:52:38.484139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.997 [2024-10-01 13:52:38.484988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.997 [2024-10-01 13:52:38.485338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.997 [2024-10-01 13:52:38.485483] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.997 [2024-10-01 13:52:38.485523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.997 [2024-10-01 13:52:38.485543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.997 [2024-10-01 13:52:38.485579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.997 [2024-10-01 13:52:38.485628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.997 [2024-10-01 13:52:38.485651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.997 [2024-10-01 13:52:38.485666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.997 [2024-10-01 13:52:38.485698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.997 [2024-10-01 13:52:38.492660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.997 [2024-10-01 13:52:38.492813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.997 [2024-10-01 13:52:38.492848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.997 [2024-10-01 13:52:38.492867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.997 [2024-10-01 13:52:38.492902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.997 [2024-10-01 13:52:38.492954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.998 [2024-10-01 13:52:38.492973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.998 [2024-10-01 13:52:38.492990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.998 [2024-10-01 13:52:38.493021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.998 [2024-10-01 13:52:38.496315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.998 [2024-10-01 13:52:38.496432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.998 [2024-10-01 13:52:38.496463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.998 [2024-10-01 13:52:38.496482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.998 [2024-10-01 13:52:38.496515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.998 [2024-10-01 13:52:38.496548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.998 [2024-10-01 13:52:38.496566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.998 [2024-10-01 13:52:38.496581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.998 [2024-10-01 13:52:38.496612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.998 [2024-10-01 13:52:38.502774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.998 [2024-10-01 13:52:38.502921] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.998 [2024-10-01 13:52:38.502967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.998 [2024-10-01 13:52:38.503020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.998 [2024-10-01 13:52:38.503058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.998 [2024-10-01 13:52:38.503091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.998 [2024-10-01 13:52:38.503110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.998 [2024-10-01 13:52:38.503125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.998 [2024-10-01 13:52:38.503713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.998 [2024-10-01 13:52:38.506685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.998 [2024-10-01 13:52:38.506799] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.998 [2024-10-01 13:52:38.506841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.998 [2024-10-01 13:52:38.506862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.998 [2024-10-01 13:52:38.507462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.998 [2024-10-01 13:52:38.507649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.998 [2024-10-01 13:52:38.507683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.998 [2024-10-01 13:52:38.507702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.998 [2024-10-01 13:52:38.507813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.998 [2024-10-01 13:52:38.512881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.998 [2024-10-01 13:52:38.513010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.998 [2024-10-01 13:52:38.513042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.998 [2024-10-01 13:52:38.513060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.998 [2024-10-01 13:52:38.514291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.998 [2024-10-01 13:52:38.515104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.998 [2024-10-01 13:52:38.515142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.998 [2024-10-01 13:52:38.515161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.998 [2024-10-01 13:52:38.515483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.998 [2024-10-01 13:52:38.517250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.998 [2024-10-01 13:52:38.517361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.998 [2024-10-01 13:52:38.517392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.998 [2024-10-01 13:52:38.517411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.998 [2024-10-01 13:52:38.517445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.998 [2024-10-01 13:52:38.517483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.998 [2024-10-01 13:52:38.517531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.998 [2024-10-01 13:52:38.517546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.998 [2024-10-01 13:52:38.517579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.998 [2024-10-01 13:52:38.522984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.998 [2024-10-01 13:52:38.523106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.998 [2024-10-01 13:52:38.523138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.998 [2024-10-01 13:52:38.523157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.998 [2024-10-01 13:52:38.523189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.998 [2024-10-01 13:52:38.523222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.998 [2024-10-01 13:52:38.523240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.998 [2024-10-01 13:52:38.523254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.998 [2024-10-01 13:52:38.524476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.998 [2024-10-01 13:52:38.527504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.998 [2024-10-01 13:52:38.527615] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.998 [2024-10-01 13:52:38.527646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.998 [2024-10-01 13:52:38.527664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.998 [2024-10-01 13:52:38.527696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.998 [2024-10-01 13:52:38.527728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.998 [2024-10-01 13:52:38.527746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.998 [2024-10-01 13:52:38.527761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.998 [2024-10-01 13:52:38.527792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.998 [2024-10-01 13:52:38.533085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.998 [2024-10-01 13:52:38.533200] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.998 [2024-10-01 13:52:38.533240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.998 [2024-10-01 13:52:38.533261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.998 [2024-10-01 13:52:38.533848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.998 [2024-10-01 13:52:38.534062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.998 [2024-10-01 13:52:38.534098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.998 [2024-10-01 13:52:38.534116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.998 [2024-10-01 13:52:38.534227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.998 [2024-10-01 13:52:38.537683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.998 [2024-10-01 13:52:38.537835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.998 [2024-10-01 13:52:38.537875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.998 [2024-10-01 13:52:38.537896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.998 [2024-10-01 13:52:38.537946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.998 [2024-10-01 13:52:38.537998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.998 [2024-10-01 13:52:38.538020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.998 [2024-10-01 13:52:38.538035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.998 [2024-10-01 13:52:38.538066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.998 [2024-10-01 13:52:38.544361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.998 [2024-10-01 13:52:38.545223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.998 [2024-10-01 13:52:38.545268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.998 [2024-10-01 13:52:38.545289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.998 [2024-10-01 13:52:38.545608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.998 [2024-10-01 13:52:38.545698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.998 [2024-10-01 13:52:38.545724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.998 [2024-10-01 13:52:38.545739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.998 [2024-10-01 13:52:38.545773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.998 [2024-10-01 13:52:38.547811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.998 [2024-10-01 13:52:38.547936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.998 [2024-10-01 13:52:38.547968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.998 [2024-10-01 13:52:38.547987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.998 [2024-10-01 13:52:38.548020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.998 [2024-10-01 13:52:38.548052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.998 [2024-10-01 13:52:38.548069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.998 [2024-10-01 13:52:38.548083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.998 [2024-10-01 13:52:38.548114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.998 [2024-10-01 13:52:38.555613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.998 [2024-10-01 13:52:38.556432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.998 [2024-10-01 13:52:38.556476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.998 [2024-10-01 13:52:38.556497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.998 [2024-10-01 13:52:38.556635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.999 [2024-10-01 13:52:38.556675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.999 [2024-10-01 13:52:38.556694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.999 [2024-10-01 13:52:38.556708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.999 [2024-10-01 13:52:38.556741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.999 [2024-10-01 13:52:38.557901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.999 [2024-10-01 13:52:38.558024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.999 [2024-10-01 13:52:38.558063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.999 [2024-10-01 13:52:38.558083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.999 [2024-10-01 13:52:38.559322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.999 [2024-10-01 13:52:38.560129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.999 [2024-10-01 13:52:38.560168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.999 [2024-10-01 13:52:38.560186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.999 [2024-10-01 13:52:38.560505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.999 [2024-10-01 13:52:38.566710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.999 [2024-10-01 13:52:38.566843] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.999 [2024-10-01 13:52:38.566884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.999 [2024-10-01 13:52:38.566904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.999 [2024-10-01 13:52:38.567499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.999 [2024-10-01 13:52:38.567683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.999 [2024-10-01 13:52:38.567718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.999 [2024-10-01 13:52:38.567736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.999 [2024-10-01 13:52:38.567845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.999 [2024-10-01 13:52:38.567995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.999 [2024-10-01 13:52:38.568099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.999 [2024-10-01 13:52:38.568130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.999 [2024-10-01 13:52:38.568148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.999 [2024-10-01 13:52:38.568180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.999 [2024-10-01 13:52:38.568212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.999 [2024-10-01 13:52:38.568229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.999 [2024-10-01 13:52:38.568265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.999 [2024-10-01 13:52:38.569489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.999 [2024-10-01 13:52:38.577187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.999 [2024-10-01 13:52:38.577307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.999 [2024-10-01 13:52:38.577339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.999 [2024-10-01 13:52:38.577358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.999 [2024-10-01 13:52:38.577392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.999 [2024-10-01 13:52:38.577424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.999 [2024-10-01 13:52:38.577442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.999 [2024-10-01 13:52:38.577458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.999 [2024-10-01 13:52:38.577489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.999 [2024-10-01 13:52:38.578073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.999 [2024-10-01 13:52:38.578731] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.999 [2024-10-01 13:52:38.578773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.999 [2024-10-01 13:52:38.578793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.999 [2024-10-01 13:52:38.578973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.999 [2024-10-01 13:52:38.579090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.999 [2024-10-01 13:52:38.579118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.999 [2024-10-01 13:52:38.579136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.999 [2024-10-01 13:52:38.579177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.999 [2024-10-01 13:52:38.587461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.999 [2024-10-01 13:52:38.587580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.999 [2024-10-01 13:52:38.587613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.999 [2024-10-01 13:52:38.587632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.999 [2024-10-01 13:52:38.587665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.999 [2024-10-01 13:52:38.587696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.999 [2024-10-01 13:52:38.587714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.999 [2024-10-01 13:52:38.587729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.999 [2024-10-01 13:52:38.587761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.999 [2024-10-01 13:52:38.590293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.999 [2024-10-01 13:52:38.590442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.999 [2024-10-01 13:52:38.590509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.999 [2024-10-01 13:52:38.590532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.999 [2024-10-01 13:52:38.590580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.999 [2024-10-01 13:52:38.590613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.999 [2024-10-01 13:52:38.590631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.999 [2024-10-01 13:52:38.590645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.999 [2024-10-01 13:52:38.590677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.999 [2024-10-01 13:52:38.597562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.999 [2024-10-01 13:52:38.597681] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.999 [2024-10-01 13:52:38.597714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.999 [2024-10-01 13:52:38.597733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.999 [2024-10-01 13:52:38.597782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.999 [2024-10-01 13:52:38.597819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.999 [2024-10-01 13:52:38.597837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.999 [2024-10-01 13:52:38.597852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.999 [2024-10-01 13:52:38.597883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.999 [2024-10-01 13:52:38.601236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.999 [2024-10-01 13:52:38.601346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.999 [2024-10-01 13:52:38.601378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.999 [2024-10-01 13:52:38.601396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:34.999 [2024-10-01 13:52:38.601429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:34.999 [2024-10-01 13:52:38.601461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.999 [2024-10-01 13:52:38.601478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.999 [2024-10-01 13:52:38.601493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.999 [2024-10-01 13:52:38.601524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:34.999 [2024-10-01 13:52:38.607670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.999 [2024-10-01 13:52:38.607811] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.999 [2024-10-01 13:52:38.607851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:34.999 [2024-10-01 13:52:38.607872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:34.999 [2024-10-01 13:52:38.607907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:34.999 [2024-10-01 13:52:38.607994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.999 [2024-10-01 13:52:38.608014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.999 [2024-10-01 13:52:38.608029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.999 [2024-10-01 13:52:38.608061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.999 [2024-10-01 13:52:38.611682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.999 [2024-10-01 13:52:38.611799] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.999 [2024-10-01 13:52:38.611838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:34.999 [2024-10-01 13:52:38.611858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.000 [2024-10-01 13:52:38.612466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.000 [2024-10-01 13:52:38.612659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.000 [2024-10-01 13:52:38.612694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.000 [2024-10-01 13:52:38.612712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.000 [2024-10-01 13:52:38.612840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.000 [2024-10-01 13:52:38.617782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.000 [2024-10-01 13:52:38.617926] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.000 [2024-10-01 13:52:38.617960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.000 [2024-10-01 13:52:38.617980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.000 [2024-10-01 13:52:38.618013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.000 [2024-10-01 13:52:38.618046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.000 [2024-10-01 13:52:38.618064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.000 [2024-10-01 13:52:38.618080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.000 [2024-10-01 13:52:38.618112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.000 [2024-10-01 13:52:38.622353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.000 [2024-10-01 13:52:38.622478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.000 [2024-10-01 13:52:38.622510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.000 [2024-10-01 13:52:38.622529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.000 [2024-10-01 13:52:38.622588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.000 [2024-10-01 13:52:38.622621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.000 [2024-10-01 13:52:38.622638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.000 [2024-10-01 13:52:38.622652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.000 [2024-10-01 13:52:38.622727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.000 [2024-10-01 13:52:38.627884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.000 [2024-10-01 13:52:38.628038] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.000 [2024-10-01 13:52:38.628070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.000 [2024-10-01 13:52:38.628088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.000 [2024-10-01 13:52:38.628121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.000 [2024-10-01 13:52:38.628152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.000 [2024-10-01 13:52:38.628170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.000 [2024-10-01 13:52:38.628185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.000 [2024-10-01 13:52:38.628216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.000 [2024-10-01 13:52:38.632751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.000 [2024-10-01 13:52:38.632885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.000 [2024-10-01 13:52:38.632932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.000 [2024-10-01 13:52:38.632953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.000 [2024-10-01 13:52:38.632987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.000 [2024-10-01 13:52:38.633018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.000 [2024-10-01 13:52:38.633036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.000 [2024-10-01 13:52:38.633050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.000 [2024-10-01 13:52:38.633081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.000 [2024-10-01 13:52:38.637992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.000 [2024-10-01 13:52:38.638112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.000 [2024-10-01 13:52:38.638144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.000 [2024-10-01 13:52:38.638163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.000 [2024-10-01 13:52:38.638196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.000 [2024-10-01 13:52:38.638227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.000 [2024-10-01 13:52:38.638244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.000 [2024-10-01 13:52:38.638257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.000 [2024-10-01 13:52:38.638289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.000 [2024-10-01 13:52:38.642956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.000 [2024-10-01 13:52:38.643071] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.000 [2024-10-01 13:52:38.643104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.000 [2024-10-01 13:52:38.643146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.000 [2024-10-01 13:52:38.643181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.000 [2024-10-01 13:52:38.643213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.000 [2024-10-01 13:52:38.643230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.000 [2024-10-01 13:52:38.643244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.000 [2024-10-01 13:52:38.643274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.000 [2024-10-01 13:52:38.648089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.000 [2024-10-01 13:52:38.648204] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.000 [2024-10-01 13:52:38.648236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.000 [2024-10-01 13:52:38.648254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.000 [2024-10-01 13:52:38.648286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.000 [2024-10-01 13:52:38.648318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.000 [2024-10-01 13:52:38.648335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.000 [2024-10-01 13:52:38.648349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.000 [2024-10-01 13:52:38.648379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.000 [2024-10-01 13:52:38.653050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.000 [2024-10-01 13:52:38.653158] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.000 [2024-10-01 13:52:38.653190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.000 [2024-10-01 13:52:38.653208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.000 [2024-10-01 13:52:38.653240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.000 [2024-10-01 13:52:38.653271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.000 [2024-10-01 13:52:38.653288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.000 [2024-10-01 13:52:38.653302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.000 [2024-10-01 13:52:38.653332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.000 [2024-10-01 13:52:38.658183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.000 [2024-10-01 13:52:38.658299] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.000 [2024-10-01 13:52:38.658330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.000 [2024-10-01 13:52:38.658349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.000 [2024-10-01 13:52:38.658382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.000 [2024-10-01 13:52:38.658413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.000 [2024-10-01 13:52:38.658450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.000 [2024-10-01 13:52:38.658466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.000 [2024-10-01 13:52:38.658498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.000 [2024-10-01 13:52:38.663137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.000 [2024-10-01 13:52:38.663256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.000 [2024-10-01 13:52:38.663287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.000 [2024-10-01 13:52:38.663305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.000 [2024-10-01 13:52:38.663338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.000 [2024-10-01 13:52:38.663369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.000 [2024-10-01 13:52:38.663386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.000 [2024-10-01 13:52:38.663401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.000 [2024-10-01 13:52:38.664629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.000 [2024-10-01 13:52:38.668280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.000 [2024-10-01 13:52:38.668396] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.000 [2024-10-01 13:52:38.668426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.000 [2024-10-01 13:52:38.668445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.000 [2024-10-01 13:52:38.668478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.000 [2024-10-01 13:52:38.669072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.000 [2024-10-01 13:52:38.669111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.000 [2024-10-01 13:52:38.669130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.000 [2024-10-01 13:52:38.669299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.000 [2024-10-01 13:52:38.673232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.000 [2024-10-01 13:52:38.673360] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.000 [2024-10-01 13:52:38.673393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.000 [2024-10-01 13:52:38.673411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.000 [2024-10-01 13:52:38.673444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.000 [2024-10-01 13:52:38.673475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.001 [2024-10-01 13:52:38.673493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.001 [2024-10-01 13:52:38.673507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.001 [2024-10-01 13:52:38.673537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.001 [2024-10-01 13:52:38.678370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.001 [2024-10-01 13:52:38.679725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.001 [2024-10-01 13:52:38.679772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.001 [2024-10-01 13:52:38.679793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.001 [2024-10-01 13:52:38.680580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.001 [2024-10-01 13:52:38.680941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.001 [2024-10-01 13:52:38.680985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.001 [2024-10-01 13:52:38.681003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.001 [2024-10-01 13:52:38.681077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.001 [2024-10-01 13:52:38.683327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.001 [2024-10-01 13:52:38.683442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.001 [2024-10-01 13:52:38.683474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.001 [2024-10-01 13:52:38.683492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.001 [2024-10-01 13:52:38.683525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.001 [2024-10-01 13:52:38.684119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.001 [2024-10-01 13:52:38.684158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.001 [2024-10-01 13:52:38.684177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.001 [2024-10-01 13:52:38.684357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.001 [2024-10-01 13:52:38.688491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.001 [2024-10-01 13:52:38.688604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.001 [2024-10-01 13:52:38.688635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.001 [2024-10-01 13:52:38.688652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.001 [2024-10-01 13:52:38.689877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.001 [2024-10-01 13:52:38.690119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.001 [2024-10-01 13:52:38.690148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.001 [2024-10-01 13:52:38.690164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.001 [2024-10-01 13:52:38.691108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.001 [2024-10-01 13:52:38.694610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.001 [2024-10-01 13:52:38.695470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.001 [2024-10-01 13:52:38.695514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.001 [2024-10-01 13:52:38.695536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.001 [2024-10-01 13:52:38.695881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.001 [2024-10-01 13:52:38.695986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.001 [2024-10-01 13:52:38.696013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.001 [2024-10-01 13:52:38.696028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.001 [2024-10-01 13:52:38.696061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.001 [2024-10-01 13:52:38.699227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.001 [2024-10-01 13:52:38.699348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.001 [2024-10-01 13:52:38.699380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.001 [2024-10-01 13:52:38.699397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.001 [2024-10-01 13:52:38.699431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.001 [2024-10-01 13:52:38.699462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.001 [2024-10-01 13:52:38.699479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.001 [2024-10-01 13:52:38.699493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.001 [2024-10-01 13:52:38.699523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.001 [2024-10-01 13:52:38.705786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.001 [2024-10-01 13:52:38.706612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.001 [2024-10-01 13:52:38.706657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.001 [2024-10-01 13:52:38.706677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.001 [2024-10-01 13:52:38.706774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.001 [2024-10-01 13:52:38.706812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.001 [2024-10-01 13:52:38.706830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.001 [2024-10-01 13:52:38.706845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.001 [2024-10-01 13:52:38.706875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.001 [2024-10-01 13:52:38.710356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.001 [2024-10-01 13:52:38.710702] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.001 [2024-10-01 13:52:38.710746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.001 [2024-10-01 13:52:38.710767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.001 [2024-10-01 13:52:38.710837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.001 [2024-10-01 13:52:38.710875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.001 [2024-10-01 13:52:38.710893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.001 [2024-10-01 13:52:38.710947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.001 [2024-10-01 13:52:38.710983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.001 [2024-10-01 13:52:38.716983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.001 [2024-10-01 13:52:38.717118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.001 [2024-10-01 13:52:38.717152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.001 [2024-10-01 13:52:38.717171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.001 [2024-10-01 13:52:38.717757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.001 [2024-10-01 13:52:38.717965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.001 [2024-10-01 13:52:38.717996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.001 [2024-10-01 13:52:38.718012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.001 [2024-10-01 13:52:38.718145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.001 [2024-10-01 13:52:38.721635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.001 [2024-10-01 13:52:38.721753] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.001 [2024-10-01 13:52:38.721784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.001 [2024-10-01 13:52:38.721802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.001 [2024-10-01 13:52:38.721835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.001 [2024-10-01 13:52:38.721866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.001 [2024-10-01 13:52:38.721882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.001 [2024-10-01 13:52:38.721897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.001 [2024-10-01 13:52:38.721945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.001 [2024-10-01 13:52:38.727525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.001 [2024-10-01 13:52:38.727650] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.001 [2024-10-01 13:52:38.727682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.001 [2024-10-01 13:52:38.727701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.001 [2024-10-01 13:52:38.727734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.001 [2024-10-01 13:52:38.727765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.001 [2024-10-01 13:52:38.727783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.001 [2024-10-01 13:52:38.727797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.001 [2024-10-01 13:52:38.727828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.001 [2024-10-01 13:52:38.731993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.001 [2024-10-01 13:52:38.732110] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.001 [2024-10-01 13:52:38.732171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.001 [2024-10-01 13:52:38.732191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.001 [2024-10-01 13:52:38.732772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.001 [2024-10-01 13:52:38.732990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.001 [2024-10-01 13:52:38.733018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.001 [2024-10-01 13:52:38.733033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.001 [2024-10-01 13:52:38.733158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.002 [2024-10-01 13:52:38.737806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.002 [2024-10-01 13:52:38.737945] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.002 [2024-10-01 13:52:38.737978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.002 [2024-10-01 13:52:38.737996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.002 [2024-10-01 13:52:38.738030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.002 [2024-10-01 13:52:38.738060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.002 [2024-10-01 13:52:38.738078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.002 [2024-10-01 13:52:38.738092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.002 [2024-10-01 13:52:38.738122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.002 [2024-10-01 13:52:38.742548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.002 [2024-10-01 13:52:38.742662] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.002 [2024-10-01 13:52:38.742693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.002 [2024-10-01 13:52:38.742712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.002 [2024-10-01 13:52:38.742744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.002 [2024-10-01 13:52:38.742775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.002 [2024-10-01 13:52:38.742791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.002 [2024-10-01 13:52:38.742806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.002 [2024-10-01 13:52:38.742836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.002 [2024-10-01 13:52:38.747959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.002 [2024-10-01 13:52:38.748073] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.002 [2024-10-01 13:52:38.748105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.002 [2024-10-01 13:52:38.748123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.002 [2024-10-01 13:52:38.748156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.002 [2024-10-01 13:52:38.748227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.002 [2024-10-01 13:52:38.748250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.002 [2024-10-01 13:52:38.748265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.002 [2024-10-01 13:52:38.748296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.002 [2024-10-01 13:52:38.752793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.002 [2024-10-01 13:52:38.752924] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.002 [2024-10-01 13:52:38.752957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.002 [2024-10-01 13:52:38.752975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.002 [2024-10-01 13:52:38.753008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.002 [2024-10-01 13:52:38.753051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.002 [2024-10-01 13:52:38.753071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.002 [2024-10-01 13:52:38.753085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.002 [2024-10-01 13:52:38.753116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.002 [2024-10-01 13:52:38.758062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.002 [2024-10-01 13:52:38.758175] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.002 [2024-10-01 13:52:38.758206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.002 [2024-10-01 13:52:38.758224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.002 [2024-10-01 13:52:38.758256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.002 [2024-10-01 13:52:38.758286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.002 [2024-10-01 13:52:38.758303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.002 [2024-10-01 13:52:38.758317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.002 [2024-10-01 13:52:38.758347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.002 [2024-10-01 13:52:38.762885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.002 [2024-10-01 13:52:38.763009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.002 [2024-10-01 13:52:38.763041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.002 [2024-10-01 13:52:38.763059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.002 [2024-10-01 13:52:38.763091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.002 [2024-10-01 13:52:38.763142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.002 [2024-10-01 13:52:38.763161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.002 [2024-10-01 13:52:38.763175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.002 [2024-10-01 13:52:38.763222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.002 [2024-10-01 13:52:38.768157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.002 [2024-10-01 13:52:38.768272] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.002 [2024-10-01 13:52:38.768304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.002 [2024-10-01 13:52:38.768322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.002 [2024-10-01 13:52:38.769555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.002 [2024-10-01 13:52:38.770367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.002 [2024-10-01 13:52:38.770408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.002 [2024-10-01 13:52:38.770427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.002 [2024-10-01 13:52:38.770758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.002 [2024-10-01 13:52:38.772985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.002 [2024-10-01 13:52:38.773102] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.002 [2024-10-01 13:52:38.773133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.002 [2024-10-01 13:52:38.773151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.002 [2024-10-01 13:52:38.773183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.002 [2024-10-01 13:52:38.773224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.002 [2024-10-01 13:52:38.773244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.002 [2024-10-01 13:52:38.773259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.002 [2024-10-01 13:52:38.773289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.002 [2024-10-01 13:52:38.778250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.002 [2024-10-01 13:52:38.778362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.002 [2024-10-01 13:52:38.778393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.002 [2024-10-01 13:52:38.778411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.002 [2024-10-01 13:52:38.778443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.002 [2024-10-01 13:52:38.778474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.002 [2024-10-01 13:52:38.778491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.002 [2024-10-01 13:52:38.778505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.002 [2024-10-01 13:52:38.779740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.002 [2024-10-01 13:52:38.783074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.002 [2024-10-01 13:52:38.783189] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.002 [2024-10-01 13:52:38.783221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.002 [2024-10-01 13:52:38.783258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.002 [2024-10-01 13:52:38.783293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.002 [2024-10-01 13:52:38.784517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.002 [2024-10-01 13:52:38.784557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.002 [2024-10-01 13:52:38.784576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.002 [2024-10-01 13:52:38.785354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.002 [2024-10-01 13:52:38.788340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.002 [2024-10-01 13:52:38.789018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.002 [2024-10-01 13:52:38.789063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.002 [2024-10-01 13:52:38.789084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.002 [2024-10-01 13:52:38.789243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.002 [2024-10-01 13:52:38.789365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.002 [2024-10-01 13:52:38.789400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.002 [2024-10-01 13:52:38.789418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.002 [2024-10-01 13:52:38.789458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.002 [2024-10-01 13:52:38.793168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.002 [2024-10-01 13:52:38.793292] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.002 [2024-10-01 13:52:38.793324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.002 [2024-10-01 13:52:38.793342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.002 [2024-10-01 13:52:38.793375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.002 [2024-10-01 13:52:38.793417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.002 [2024-10-01 13:52:38.793435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.002 [2024-10-01 13:52:38.793450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.002 [2024-10-01 13:52:38.794676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.002 [2024-10-01 13:52:38.800339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.002 [2024-10-01 13:52:38.800675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.002 [2024-10-01 13:52:38.800720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.002 [2024-10-01 13:52:38.800740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.002 [2024-10-01 13:52:38.800810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.002 [2024-10-01 13:52:38.800849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.003 [2024-10-01 13:52:38.800882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.003 [2024-10-01 13:52:38.800898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.003 [2024-10-01 13:52:38.800948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.003 [2024-10-01 13:52:38.803269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.003 [2024-10-01 13:52:38.803945] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.003 [2024-10-01 13:52:38.803989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.003 [2024-10-01 13:52:38.804009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.003 [2024-10-01 13:52:38.804223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.003 [2024-10-01 13:52:38.804351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.003 [2024-10-01 13:52:38.804373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.003 [2024-10-01 13:52:38.804387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.003 [2024-10-01 13:52:38.804426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.003 [2024-10-01 13:52:38.811493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.003 [2024-10-01 13:52:38.811618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.003 [2024-10-01 13:52:38.811650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.003 [2024-10-01 13:52:38.811669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.003 [2024-10-01 13:52:38.811702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.003 [2024-10-01 13:52:38.811733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.003 [2024-10-01 13:52:38.811750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.003 [2024-10-01 13:52:38.811764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.003 [2024-10-01 13:52:38.811795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.003 [2024-10-01 13:52:38.815275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.003 [2024-10-01 13:52:38.815611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.003 [2024-10-01 13:52:38.815655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.003 [2024-10-01 13:52:38.815677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.003 [2024-10-01 13:52:38.815747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.003 [2024-10-01 13:52:38.815785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.003 [2024-10-01 13:52:38.815803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.003 [2024-10-01 13:52:38.815818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.003 [2024-10-01 13:52:38.815849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.003 [2024-10-01 13:52:38.821790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.003 [2024-10-01 13:52:38.821927] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.003 [2024-10-01 13:52:38.821960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.003 [2024-10-01 13:52:38.821978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.003 [2024-10-01 13:52:38.822570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.003 [2024-10-01 13:52:38.822759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.003 [2024-10-01 13:52:38.822835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.003 [2024-10-01 13:52:38.822854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.003 [2024-10-01 13:52:38.822980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.003 [2024-10-01 13:52:38.826478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.003 [2024-10-01 13:52:38.826612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.003 [2024-10-01 13:52:38.826644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.003 [2024-10-01 13:52:38.826662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.003 [2024-10-01 13:52:38.826695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.003 [2024-10-01 13:52:38.826726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.003 [2024-10-01 13:52:38.826743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.003 [2024-10-01 13:52:38.826757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.003 [2024-10-01 13:52:38.826788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.003 [2024-10-01 13:52:38.832377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.003 [2024-10-01 13:52:38.832501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.003 [2024-10-01 13:52:38.832534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.003 [2024-10-01 13:52:38.832552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.003 [2024-10-01 13:52:38.832587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.003 [2024-10-01 13:52:38.832618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.003 [2024-10-01 13:52:38.832636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.003 [2024-10-01 13:52:38.832651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.003 [2024-10-01 13:52:38.832681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.003 [2024-10-01 13:52:38.836892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.003 [2024-10-01 13:52:38.837029] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.003 [2024-10-01 13:52:38.837061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.003 [2024-10-01 13:52:38.837079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.003 [2024-10-01 13:52:38.837696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.003 [2024-10-01 13:52:38.837889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.003 [2024-10-01 13:52:38.837939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.003 [2024-10-01 13:52:38.837958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.003 [2024-10-01 13:52:38.838093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.003 [2024-10-01 13:52:38.842683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.003 [2024-10-01 13:52:38.842807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.003 [2024-10-01 13:52:38.842839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.003 [2024-10-01 13:52:38.842857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.003 [2024-10-01 13:52:38.842889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.003 [2024-10-01 13:52:38.842937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.003 [2024-10-01 13:52:38.842959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.003 [2024-10-01 13:52:38.842973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.003 [2024-10-01 13:52:38.843004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.003 [2024-10-01 13:52:38.847415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.003 [2024-10-01 13:52:38.847530] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.003 [2024-10-01 13:52:38.847561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.003 [2024-10-01 13:52:38.847578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.003 [2024-10-01 13:52:38.847611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.003 [2024-10-01 13:52:38.847642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.003 [2024-10-01 13:52:38.847659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.003 [2024-10-01 13:52:38.847673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.003 [2024-10-01 13:52:38.847702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.003 [2024-10-01 13:52:38.852786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.003 [2024-10-01 13:52:38.852900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.003 [2024-10-01 13:52:38.852949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.003 [2024-10-01 13:52:38.852969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.003 [2024-10-01 13:52:38.853003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.003 [2024-10-01 13:52:38.853034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.003 [2024-10-01 13:52:38.853051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.003 [2024-10-01 13:52:38.853085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.003 [2024-10-01 13:52:38.853118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.003 [2024-10-01 13:52:38.857590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.003 [2024-10-01 13:52:38.857707] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.003 [2024-10-01 13:52:38.857740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.003 [2024-10-01 13:52:38.857758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.003 [2024-10-01 13:52:38.857790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.003 [2024-10-01 13:52:38.857822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.003 [2024-10-01 13:52:38.857839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.003 [2024-10-01 13:52:38.857853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.003 [2024-10-01 13:52:38.857883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.003 [2024-10-01 13:52:38.862883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.003 [2024-10-01 13:52:38.863010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.003 [2024-10-01 13:52:38.863042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.003 [2024-10-01 13:52:38.863060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.003 [2024-10-01 13:52:38.863092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.003 [2024-10-01 13:52:38.863123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.003 [2024-10-01 13:52:38.863140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.003 [2024-10-01 13:52:38.863153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.003 [2024-10-01 13:52:38.863183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.003 8041.60 IOPS, 31.41 MiB/s [2024-10-01 13:52:38.870471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.003 [2024-10-01 13:52:38.871635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.003 [2024-10-01 13:52:38.871681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.003 [2024-10-01 13:52:38.871702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.003 [2024-10-01 13:52:38.872637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.003 [2024-10-01 13:52:38.872844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.003 [2024-10-01 13:52:38.872879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.003 [2024-10-01 13:52:38.872897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.003 [2024-10-01 13:52:38.873023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.003 [2024-10-01 13:52:38.875035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.003 [2024-10-01 13:52:38.875455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.003 [2024-10-01 13:52:38.875499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.003 [2024-10-01 13:52:38.875520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.003 [2024-10-01 13:52:38.875591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.003 [2024-10-01 13:52:38.875630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.004 [2024-10-01 13:52:38.875648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.004 [2024-10-01 13:52:38.875662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.004 [2024-10-01 13:52:38.875693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
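The "8041.60 IOPS, 31.41 MiB/s" figure interleaved above is a periodic throughput sample printed by the I/O generator while the reconnect attempts keep failing. The two numbers agree only if the workload issues 4 KiB I/Os; that block size is an assumption about the test configuration, not something stated in this excerpt. A quick consistency check under that assumption:

    # Hedged consistency check of the throughput sample above.
    # The 4096-byte I/O size is an assumption, not taken from this log.
    iops = 8041.60
    io_size_bytes = 4096
    mib_per_s = iops * io_size_bytes / (1024 * 1024)
    print(f"{mib_per_s:.2f} MiB/s")  # ~31.41, matching the reported figure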
00:18:35.004 [2024-10-01 13:52:38.881632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.004 [2024-10-01 13:52:38.881754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.004 [2024-10-01 13:52:38.881787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.004 [2024-10-01 13:52:38.881805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.004 [2024-10-01 13:52:38.882394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.004 [2024-10-01 13:52:38.882594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.004 [2024-10-01 13:52:38.882631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.004 [2024-10-01 13:52:38.882649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.004 [2024-10-01 13:52:38.882758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.004 [2024-10-01 13:52:38.886239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.004 [2024-10-01 13:52:38.886356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.004 [2024-10-01 13:52:38.886387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.004 [2024-10-01 13:52:38.886405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.004 [2024-10-01 13:52:38.886438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.004 [2024-10-01 13:52:38.886469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.004 [2024-10-01 13:52:38.886485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.004 [2024-10-01 13:52:38.886499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.004 [2024-10-01 13:52:38.886530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.004 [2024-10-01 13:52:38.892150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.004 [2024-10-01 13:52:38.892269] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.004 [2024-10-01 13:52:38.892302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.004 [2024-10-01 13:52:38.892319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.004 [2024-10-01 13:52:38.892352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.004 [2024-10-01 13:52:38.892403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.004 [2024-10-01 13:52:38.892421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.004 [2024-10-01 13:52:38.892435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.004 [2024-10-01 13:52:38.892466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.004 [2024-10-01 13:52:38.896605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.004 [2024-10-01 13:52:38.896720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.004 [2024-10-01 13:52:38.896751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.004 [2024-10-01 13:52:38.896769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.004 [2024-10-01 13:52:38.897358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.004 [2024-10-01 13:52:38.897564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.004 [2024-10-01 13:52:38.897601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.004 [2024-10-01 13:52:38.897620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.004 [2024-10-01 13:52:38.897727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.004 [2024-10-01 13:52:38.902379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.004 [2024-10-01 13:52:38.902494] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.004 [2024-10-01 13:52:38.902525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.004 [2024-10-01 13:52:38.902558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.004 [2024-10-01 13:52:38.902593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.004 [2024-10-01 13:52:38.902623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.004 [2024-10-01 13:52:38.902639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.004 [2024-10-01 13:52:38.902653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.004 [2024-10-01 13:52:38.902684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.004 [2024-10-01 13:52:38.907159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.004 [2024-10-01 13:52:38.907276] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.004 [2024-10-01 13:52:38.907308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.004 [2024-10-01 13:52:38.907325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.004 [2024-10-01 13:52:38.907358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.004 [2024-10-01 13:52:38.907400] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.004 [2024-10-01 13:52:38.907419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.004 [2024-10-01 13:52:38.907434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.004 [2024-10-01 13:52:38.907483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.004 [2024-10-01 13:52:38.912559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.004 [2024-10-01 13:52:38.912680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.004 [2024-10-01 13:52:38.912712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.004 [2024-10-01 13:52:38.912730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.004 [2024-10-01 13:52:38.912763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.004 [2024-10-01 13:52:38.912794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.004 [2024-10-01 13:52:38.912812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.004 [2024-10-01 13:52:38.912826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.004 [2024-10-01 13:52:38.912857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.004 [2024-10-01 13:52:38.917529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.004 [2024-10-01 13:52:38.917660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.004 [2024-10-01 13:52:38.917692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.004 [2024-10-01 13:52:38.917710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.004 [2024-10-01 13:52:38.917744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.004 [2024-10-01 13:52:38.917775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.004 [2024-10-01 13:52:38.917793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.004 [2024-10-01 13:52:38.917808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.004 [2024-10-01 13:52:38.917838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.004 [2024-10-01 13:52:38.922655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.004 [2024-10-01 13:52:38.922776] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.004 [2024-10-01 13:52:38.922808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.004 [2024-10-01 13:52:38.922826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.004 [2024-10-01 13:52:38.922859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.004 [2024-10-01 13:52:38.922890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.004 [2024-10-01 13:52:38.922907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.004 [2024-10-01 13:52:38.922940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.004 [2024-10-01 13:52:38.922973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.004 [2024-10-01 13:52:38.927753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.004 [2024-10-01 13:52:38.927869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.004 [2024-10-01 13:52:38.927901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.004 [2024-10-01 13:52:38.927968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.004 [2024-10-01 13:52:38.928005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.004 [2024-10-01 13:52:38.928036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.004 [2024-10-01 13:52:38.928054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.004 [2024-10-01 13:52:38.928068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.004 [2024-10-01 13:52:38.928099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.004 [2024-10-01 13:52:38.932747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.004 [2024-10-01 13:52:38.932864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.004 [2024-10-01 13:52:38.932896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.004 [2024-10-01 13:52:38.932930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:38.932966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:38.932998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:38.933014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.005 [2024-10-01 13:52:38.933028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.005 [2024-10-01 13:52:38.933058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.005 [2024-10-01 13:52:38.937842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.005 [2024-10-01 13:52:38.937971] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.005 [2024-10-01 13:52:38.938003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.005 [2024-10-01 13:52:38.938020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:38.938053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:38.938084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:38.938101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.005 [2024-10-01 13:52:38.938115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.005 [2024-10-01 13:52:38.938145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.005 [2024-10-01 13:52:38.942863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.005 [2024-10-01 13:52:38.942991] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.005 [2024-10-01 13:52:38.943022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.005 [2024-10-01 13:52:38.943039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:38.943071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:38.943102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:38.943137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.005 [2024-10-01 13:52:38.943152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.005 [2024-10-01 13:52:38.943184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.005 [2024-10-01 13:52:38.947951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.005 [2024-10-01 13:52:38.948066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.005 [2024-10-01 13:52:38.948098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.005 [2024-10-01 13:52:38.948115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:38.948150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:38.949361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:38.949400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.005 [2024-10-01 13:52:38.949419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.005 [2024-10-01 13:52:38.950179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.005 [2024-10-01 13:52:38.952966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.005 [2024-10-01 13:52:38.953077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.005 [2024-10-01 13:52:38.953108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.005 [2024-10-01 13:52:38.953126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:38.953158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:38.953189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:38.953206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.005 [2024-10-01 13:52:38.953220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.005 [2024-10-01 13:52:38.953788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.005 [2024-10-01 13:52:38.958045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.005 [2024-10-01 13:52:38.958158] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.005 [2024-10-01 13:52:38.958189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.005 [2024-10-01 13:52:38.958207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:38.958238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:38.958269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:38.958286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.005 [2024-10-01 13:52:38.958300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.005 [2024-10-01 13:52:38.959522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.005 [2024-10-01 13:52:38.963058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.005 [2024-10-01 13:52:38.964359] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.005 [2024-10-01 13:52:38.964405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.005 [2024-10-01 13:52:38.964426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:38.965199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:38.965538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:38.965577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.005 [2024-10-01 13:52:38.965595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.005 [2024-10-01 13:52:38.965667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.005 [2024-10-01 13:52:38.968136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.005 [2024-10-01 13:52:38.968789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.005 [2024-10-01 13:52:38.968833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.005 [2024-10-01 13:52:38.968854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:38.969048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:38.969166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:38.969187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.005 [2024-10-01 13:52:38.969202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.005 [2024-10-01 13:52:38.969241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.005 [2024-10-01 13:52:38.973148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.005 [2024-10-01 13:52:38.974471] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.005 [2024-10-01 13:52:38.974515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.005 [2024-10-01 13:52:38.974546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:38.974761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:38.975674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:38.975712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.005 [2024-10-01 13:52:38.975731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.005 [2024-10-01 13:52:38.976475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.005 [2024-10-01 13:52:38.980016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.005 [2024-10-01 13:52:38.980365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.005 [2024-10-01 13:52:38.980409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.005 [2024-10-01 13:52:38.980430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:38.980528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:38.980570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:38.980588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.005 [2024-10-01 13:52:38.980603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.005 [2024-10-01 13:52:38.980634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.005 [2024-10-01 13:52:38.983824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.005 [2024-10-01 13:52:38.983970] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.005 [2024-10-01 13:52:38.984002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.005 [2024-10-01 13:52:38.984020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:38.984056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:38.984088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:38.984106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.005 [2024-10-01 13:52:38.984120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.005 [2024-10-01 13:52:38.984151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.005 [2024-10-01 13:52:38.991202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.005 [2024-10-01 13:52:38.991326] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.005 [2024-10-01 13:52:38.991358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.005 [2024-10-01 13:52:38.991376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:38.991409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:38.991441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:38.991457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.005 [2024-10-01 13:52:38.991471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.005 [2024-10-01 13:52:38.991502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.005 [2024-10-01 13:52:38.994906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.005 [2024-10-01 13:52:38.995256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.005 [2024-10-01 13:52:38.995300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.005 [2024-10-01 13:52:38.995321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:38.995391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:38.995429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:38.995446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.005 [2024-10-01 13:52:38.995480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.005 [2024-10-01 13:52:38.995515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.005 [2024-10-01 13:52:39.001438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.005 [2024-10-01 13:52:39.001552] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.005 [2024-10-01 13:52:39.001583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.005 [2024-10-01 13:52:39.001602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:39.002192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:39.002380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:39.002416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.005 [2024-10-01 13:52:39.002435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.005 [2024-10-01 13:52:39.002556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.005 [2024-10-01 13:52:39.005997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.005 [2024-10-01 13:52:39.006110] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.005 [2024-10-01 13:52:39.006141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.005 [2024-10-01 13:52:39.006159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.005 [2024-10-01 13:52:39.006191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.005 [2024-10-01 13:52:39.006222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.005 [2024-10-01 13:52:39.006239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.006253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.006284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.006 [2024-10-01 13:52:39.011933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.012047] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.012078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.006 [2024-10-01 13:52:39.012096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.012128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.012158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.012175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.012189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.012219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.006 [2024-10-01 13:52:39.016315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.016449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.016480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.006 [2024-10-01 13:52:39.016498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.017095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.017283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.017319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.017338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.017447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.006 [2024-10-01 13:52:39.022098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.022223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.022254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.006 [2024-10-01 13:52:39.022271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.022303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.022334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.022351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.022366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.022395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.006 [2024-10-01 13:52:39.026775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.026891] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.026937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.006 [2024-10-01 13:52:39.026956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.026989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.027019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.027036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.027050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.027081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.006 [2024-10-01 13:52:39.032192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.032306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.032337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.006 [2024-10-01 13:52:39.032355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.032403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.032456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.032475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.032489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.032520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.006 [2024-10-01 13:52:39.037028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.037143] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.037175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.006 [2024-10-01 13:52:39.037193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.037225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.037256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.037273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.037288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.037319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.006 [2024-10-01 13:52:39.042285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.042398] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.042429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.006 [2024-10-01 13:52:39.042447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.042479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.042509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.042527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.042555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.042589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.006 [2024-10-01 13:52:39.047129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.047256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.047288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.006 [2024-10-01 13:52:39.047313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.047346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.047377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.047394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.047408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.047460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.006 [2024-10-01 13:52:39.052376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.052498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.052530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.006 [2024-10-01 13:52:39.052549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.053783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.054613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.054655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.054675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.055016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.006 [2024-10-01 13:52:39.057236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.057353] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.057385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.006 [2024-10-01 13:52:39.057404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.057437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.057468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.057485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.057500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.057531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.006 [2024-10-01 13:52:39.062475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.062618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.062650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.006 [2024-10-01 13:52:39.062669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.062703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.062735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.062753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.062768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.062800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.006 [2024-10-01 13:52:39.067328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.067443] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.067475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.006 [2024-10-01 13:52:39.067521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.067557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.067588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.067606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.067620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.067651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.006 [2024-10-01 13:52:39.072591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.072706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.072738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.006 [2024-10-01 13:52:39.072756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.072788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.072819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.072836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.072850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.073442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.006 [2024-10-01 13:52:39.077418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.077533] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.077565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.006 [2024-10-01 13:52:39.077583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.077616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.077647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.077664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.077678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.077709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.006 [2024-10-01 13:52:39.082688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.082803] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.006 [2024-10-01 13:52:39.082835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.006 [2024-10-01 13:52:39.082853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.006 [2024-10-01 13:52:39.084076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.006 [2024-10-01 13:52:39.084857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.006 [2024-10-01 13:52:39.084930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.006 [2024-10-01 13:52:39.084953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.006 [2024-10-01 13:52:39.085276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.006 [2024-10-01 13:52:39.087510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.006 [2024-10-01 13:52:39.087625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.007 [2024-10-01 13:52:39.087656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.007 [2024-10-01 13:52:39.087675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.007 [2024-10-01 13:52:39.087708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.007 [2024-10-01 13:52:39.087739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.007 [2024-10-01 13:52:39.087756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.007 [2024-10-01 13:52:39.087771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.007 [2024-10-01 13:52:39.087802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.007 [2024-10-01 13:52:39.092778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.007 [2024-10-01 13:52:39.092892] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.007 [2024-10-01 13:52:39.092941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.007 [2024-10-01 13:52:39.092961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.007 [2024-10-01 13:52:39.092994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.007 [2024-10-01 13:52:39.093025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.007 [2024-10-01 13:52:39.093042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.007 [2024-10-01 13:52:39.093057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.007 [2024-10-01 13:52:39.094277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.007 [2024-10-01 13:52:39.097600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.007 [2024-10-01 13:52:39.097713] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.007 [2024-10-01 13:52:39.097745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.007 [2024-10-01 13:52:39.097763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.007 [2024-10-01 13:52:39.097795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.007 [2024-10-01 13:52:39.097826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.007 [2024-10-01 13:52:39.097843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.007 [2024-10-01 13:52:39.097856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.007 [2024-10-01 13:52:39.099099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.007 [2024-10-01 13:52:39.103540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.007 [2024-10-01 13:52:39.103732] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.007 [2024-10-01 13:52:39.103765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.007 [2024-10-01 13:52:39.103783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.007 [2024-10-01 13:52:39.103824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.007 [2024-10-01 13:52:39.103859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.007 [2024-10-01 13:52:39.103876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.007 [2024-10-01 13:52:39.103891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.007 [2024-10-01 13:52:39.103938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.007 [2024-10-01 13:52:39.107687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.007 [2024-10-01 13:52:39.107801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.007 [2024-10-01 13:52:39.107832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.007 [2024-10-01 13:52:39.107850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.007 [2024-10-01 13:52:39.109076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.007 [2024-10-01 13:52:39.109324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.007 [2024-10-01 13:52:39.109354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.007 [2024-10-01 13:52:39.109369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.007 [2024-10-01 13:52:39.110268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.007 [2024-10-01 13:52:39.114930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.007 [2024-10-01 13:52:39.115087] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.007 [2024-10-01 13:52:39.115120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.007 [2024-10-01 13:52:39.115138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.007 [2024-10-01 13:52:39.115172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.007 [2024-10-01 13:52:39.115204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.007 [2024-10-01 13:52:39.115221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.007 [2024-10-01 13:52:39.115236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.007 [2024-10-01 13:52:39.115266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.007 [2024-10-01 13:52:39.118575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.007 [2024-10-01 13:52:39.118705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.007 [2024-10-01 13:52:39.118737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.007 [2024-10-01 13:52:39.118755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.007 [2024-10-01 13:52:39.118813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.007 [2024-10-01 13:52:39.118846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.007 [2024-10-01 13:52:39.118864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.007 [2024-10-01 13:52:39.118878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.007 [2024-10-01 13:52:39.118923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.007 [2024-10-01 13:52:39.126004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.007 [2024-10-01 13:52:39.126149] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.007 [2024-10-01 13:52:39.126182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.008 [2024-10-01 13:52:39.126200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.008 [2024-10-01 13:52:39.126236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.008 [2024-10-01 13:52:39.126268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.008 [2024-10-01 13:52:39.126285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.008 [2024-10-01 13:52:39.126301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.008 [2024-10-01 13:52:39.126333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.008 [2024-10-01 13:52:39.129789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.008 [2024-10-01 13:52:39.130158] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.008 [2024-10-01 13:52:39.130204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.008 [2024-10-01 13:52:39.130225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.008 [2024-10-01 13:52:39.130298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.008 [2024-10-01 13:52:39.130338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.008 [2024-10-01 13:52:39.130356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.008 [2024-10-01 13:52:39.130371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.008 [2024-10-01 13:52:39.130423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.008 [2024-10-01 13:52:39.136415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.008 [2024-10-01 13:52:39.136561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.008 [2024-10-01 13:52:39.136593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.008 [2024-10-01 13:52:39.136611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.008 [2024-10-01 13:52:39.137220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.008 [2024-10-01 13:52:39.137411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.008 [2024-10-01 13:52:39.137440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.008 [2024-10-01 13:52:39.137488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.008 [2024-10-01 13:52:39.137602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.008 [2024-10-01 13:52:39.141123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.008 [2024-10-01 13:52:39.141241] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.008 [2024-10-01 13:52:39.141273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.008 [2024-10-01 13:52:39.141290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.008 [2024-10-01 13:52:39.141323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.008 [2024-10-01 13:52:39.141354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.008 [2024-10-01 13:52:39.141372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.008 [2024-10-01 13:52:39.141386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.008 [2024-10-01 13:52:39.141417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.008 [2024-10-01 13:52:39.146952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.008 [2024-10-01 13:52:39.147068] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.008 [2024-10-01 13:52:39.147099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.008 [2024-10-01 13:52:39.147118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.008 [2024-10-01 13:52:39.147150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.008 [2024-10-01 13:52:39.147183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.008 [2024-10-01 13:52:39.147201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.008 [2024-10-01 13:52:39.147223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.008 [2024-10-01 13:52:39.147253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.008 [2024-10-01 13:52:39.151407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.008 [2024-10-01 13:52:39.151522] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.008 [2024-10-01 13:52:39.151553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.008 [2024-10-01 13:52:39.151571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.008 [2024-10-01 13:52:39.152168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.008 [2024-10-01 13:52:39.152368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.008 [2024-10-01 13:52:39.152410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.008 [2024-10-01 13:52:39.152429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.008 [2024-10-01 13:52:39.152538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.008 [2024-10-01 13:52:39.157213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.008 [2024-10-01 13:52:39.157354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.008 [2024-10-01 13:52:39.157386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.008 [2024-10-01 13:52:39.157404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.008 [2024-10-01 13:52:39.157436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.008 [2024-10-01 13:52:39.157467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.008 [2024-10-01 13:52:39.157484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.008 [2024-10-01 13:52:39.157498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.008 [2024-10-01 13:52:39.157528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.008 [2024-10-01 13:52:39.161961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.008 [2024-10-01 13:52:39.162109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.008 [2024-10-01 13:52:39.162141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.008 [2024-10-01 13:52:39.162159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.008 [2024-10-01 13:52:39.162191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.008 [2024-10-01 13:52:39.162233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.008 [2024-10-01 13:52:39.162253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.008 [2024-10-01 13:52:39.162268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.008 [2024-10-01 13:52:39.162299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.008 [2024-10-01 13:52:39.167340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.008 [2024-10-01 13:52:39.167456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.008 [2024-10-01 13:52:39.167487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.008 [2024-10-01 13:52:39.167505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.008 [2024-10-01 13:52:39.167537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.008 [2024-10-01 13:52:39.167569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.008 [2024-10-01 13:52:39.167586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.008 [2024-10-01 13:52:39.167600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.008 [2024-10-01 13:52:39.167631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.008 [2024-10-01 13:52:39.172170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.008 [2024-10-01 13:52:39.172287] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.008 [2024-10-01 13:52:39.172319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.008 [2024-10-01 13:52:39.172337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.008 [2024-10-01 13:52:39.172381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.008 [2024-10-01 13:52:39.172437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.008 [2024-10-01 13:52:39.172456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.008 [2024-10-01 13:52:39.172470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.008 [2024-10-01 13:52:39.172501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.008 [2024-10-01 13:52:39.177439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.008 [2024-10-01 13:52:39.177554] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.008 [2024-10-01 13:52:39.177585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.008 [2024-10-01 13:52:39.177602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.008 [2024-10-01 13:52:39.177635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.008 [2024-10-01 13:52:39.177666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.008 [2024-10-01 13:52:39.177683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.008 [2024-10-01 13:52:39.177697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.008 [2024-10-01 13:52:39.177728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.008 [2024-10-01 13:52:39.182275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.008 [2024-10-01 13:52:39.182389] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.008 [2024-10-01 13:52:39.182421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.008 [2024-10-01 13:52:39.182439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.008 [2024-10-01 13:52:39.182484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.008 [2024-10-01 13:52:39.182519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.008 [2024-10-01 13:52:39.182549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.008 [2024-10-01 13:52:39.182567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.008 [2024-10-01 13:52:39.182599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.008 [2024-10-01 13:52:39.187533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.008 [2024-10-01 13:52:39.187647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.008 [2024-10-01 13:52:39.187679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.008 [2024-10-01 13:52:39.187697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.008 [2024-10-01 13:52:39.188933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.008 [2024-10-01 13:52:39.189705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.008 [2024-10-01 13:52:39.189745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.008 [2024-10-01 13:52:39.189765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.008 [2024-10-01 13:52:39.190127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.008 [2024-10-01 13:52:39.192366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.008 [2024-10-01 13:52:39.192482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.008 [2024-10-01 13:52:39.192513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.008 [2024-10-01 13:52:39.192531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.008 [2024-10-01 13:52:39.192563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.008 [2024-10-01 13:52:39.192594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.008 [2024-10-01 13:52:39.192611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.008 [2024-10-01 13:52:39.192625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.008 [2024-10-01 13:52:39.192655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.008 [2024-10-01 13:52:39.197623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.008 [2024-10-01 13:52:39.197738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.009 [2024-10-01 13:52:39.197770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.009 [2024-10-01 13:52:39.197787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.009 [2024-10-01 13:52:39.199030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.009 [2024-10-01 13:52:39.199284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.009 [2024-10-01 13:52:39.199323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.009 [2024-10-01 13:52:39.199341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.009 [2024-10-01 13:52:39.200247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.009 [2024-10-01 13:52:39.202459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.009 [2024-10-01 13:52:39.202586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.009 [2024-10-01 13:52:39.202618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.009 [2024-10-01 13:52:39.202636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.009 [2024-10-01 13:52:39.203857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.009 [2024-10-01 13:52:39.204661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.009 [2024-10-01 13:52:39.204701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.009 [2024-10-01 13:52:39.204721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.009 [2024-10-01 13:52:39.205055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.009 [2024-10-01 13:52:39.208404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.009 [2024-10-01 13:52:39.208528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.009 [2024-10-01 13:52:39.208559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.009 [2024-10-01 13:52:39.208599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.009 [2024-10-01 13:52:39.208633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.009 [2024-10-01 13:52:39.208664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.009 [2024-10-01 13:52:39.208682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.009 [2024-10-01 13:52:39.208696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.009 [2024-10-01 13:52:39.208727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.009 [2024-10-01 13:52:39.212545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.009 [2024-10-01 13:52:39.212659] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.009 [2024-10-01 13:52:39.212690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.009 [2024-10-01 13:52:39.212707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.009 [2024-10-01 13:52:39.213933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.009 [2024-10-01 13:52:39.214166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.009 [2024-10-01 13:52:39.214203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.009 [2024-10-01 13:52:39.214221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.009 [2024-10-01 13:52:39.215147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.009 [2024-10-01 13:52:39.219569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.009 [2024-10-01 13:52:39.219924] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.009 [2024-10-01 13:52:39.219968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.009 [2024-10-01 13:52:39.219988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.009 [2024-10-01 13:52:39.220060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.009 [2024-10-01 13:52:39.220099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.009 [2024-10-01 13:52:39.220117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.009 [2024-10-01 13:52:39.220131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.009 [2024-10-01 13:52:39.220161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.009 [2024-10-01 13:52:39.222638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.009 [2024-10-01 13:52:39.223319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.009 [2024-10-01 13:52:39.223366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.009 [2024-10-01 13:52:39.223387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.009 [2024-10-01 13:52:39.223553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.009 [2024-10-01 13:52:39.223667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.009 [2024-10-01 13:52:39.223715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.009 [2024-10-01 13:52:39.223733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.009 [2024-10-01 13:52:39.223776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.009 [2024-10-01 13:52:39.230878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.009 [2024-10-01 13:52:39.231022] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.009 [2024-10-01 13:52:39.231054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.009 [2024-10-01 13:52:39.231073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.009 [2024-10-01 13:52:39.231107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.009 [2024-10-01 13:52:39.231139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.009 [2024-10-01 13:52:39.231156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.009 [2024-10-01 13:52:39.231170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.009 [2024-10-01 13:52:39.231202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.009 [2024-10-01 13:52:39.234707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.009 [2024-10-01 13:52:39.235073] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.009 [2024-10-01 13:52:39.235117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.009 [2024-10-01 13:52:39.235138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.009 [2024-10-01 13:52:39.235210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.009 [2024-10-01 13:52:39.235249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.009 [2024-10-01 13:52:39.235267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.009 [2024-10-01 13:52:39.235282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.009 [2024-10-01 13:52:39.235314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.009 [2024-10-01 13:52:39.241362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.009 [2024-10-01 13:52:39.241516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.009 [2024-10-01 13:52:39.241548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.009 [2024-10-01 13:52:39.241567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.009 [2024-10-01 13:52:39.242174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.009 [2024-10-01 13:52:39.242368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.009 [2024-10-01 13:52:39.242396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.009 [2024-10-01 13:52:39.242413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.009 [2024-10-01 13:52:39.242572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.009 [2024-10-01 13:52:39.246055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.009 [2024-10-01 13:52:39.246179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.009 [2024-10-01 13:52:39.246211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.009 [2024-10-01 13:52:39.246229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.009 [2024-10-01 13:52:39.246262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.009 [2024-10-01 13:52:39.246294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.009 [2024-10-01 13:52:39.246311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.009 [2024-10-01 13:52:39.246325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.009 [2024-10-01 13:52:39.246356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.009 [2024-10-01 13:52:39.252055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.009 [2024-10-01 13:52:39.252179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.009 [2024-10-01 13:52:39.252211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.009 [2024-10-01 13:52:39.252229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.009 [2024-10-01 13:52:39.252262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.009 [2024-10-01 13:52:39.252293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.009 [2024-10-01 13:52:39.252311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.009 [2024-10-01 13:52:39.252326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.009 [2024-10-01 13:52:39.252358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.009 [2024-10-01 13:52:39.256516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.009 [2024-10-01 13:52:39.256632] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.009 [2024-10-01 13:52:39.256663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.009 [2024-10-01 13:52:39.256682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.009 [2024-10-01 13:52:39.257288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.009 [2024-10-01 13:52:39.257476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.009 [2024-10-01 13:52:39.257503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.009 [2024-10-01 13:52:39.257518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.009 [2024-10-01 13:52:39.257627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.009 [2024-10-01 13:52:39.262323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.009 [2024-10-01 13:52:39.262439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.009 [2024-10-01 13:52:39.262471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.009 [2024-10-01 13:52:39.262516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.009 [2024-10-01 13:52:39.262578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.009 [2024-10-01 13:52:39.262614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.009 [2024-10-01 13:52:39.262632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.009 [2024-10-01 13:52:39.262646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.009 [2024-10-01 13:52:39.262676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.009 [2024-10-01 13:52:39.267043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.009 [2024-10-01 13:52:39.267156] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.009 [2024-10-01 13:52:39.267186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.009 [2024-10-01 13:52:39.267204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.009 [2024-10-01 13:52:39.267236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.009 [2024-10-01 13:52:39.267267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.009 [2024-10-01 13:52:39.267284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.009 [2024-10-01 13:52:39.267298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.009 [2024-10-01 13:52:39.267330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.009 [2024-10-01 13:52:39.272482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.010 [2024-10-01 13:52:39.272595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.010 [2024-10-01 13:52:39.272627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.010 [2024-10-01 13:52:39.272645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.010 [2024-10-01 13:52:39.272693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.010 [2024-10-01 13:52:39.272727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.010 [2024-10-01 13:52:39.272745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.010 [2024-10-01 13:52:39.272759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.010 [2024-10-01 13:52:39.272789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.010 [2024-10-01 13:52:39.277299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.010 [2024-10-01 13:52:39.277414] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.010 [2024-10-01 13:52:39.277445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.010 [2024-10-01 13:52:39.277463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.010 [2024-10-01 13:52:39.277495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.010 [2024-10-01 13:52:39.277526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.010 [2024-10-01 13:52:39.277543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.010 [2024-10-01 13:52:39.277574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.010 [2024-10-01 13:52:39.277607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.010 [2024-10-01 13:52:39.282591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.010 [2024-10-01 13:52:39.282703] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.010 [2024-10-01 13:52:39.282734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.010 [2024-10-01 13:52:39.282752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.010 [2024-10-01 13:52:39.282794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.010 [2024-10-01 13:52:39.282824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.010 [2024-10-01 13:52:39.282841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.010 [2024-10-01 13:52:39.282855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.010 [2024-10-01 13:52:39.282886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.010 [2024-10-01 13:52:39.287418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.010 [2024-10-01 13:52:39.287531] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.010 [2024-10-01 13:52:39.287563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.010 [2024-10-01 13:52:39.287581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.010 [2024-10-01 13:52:39.287614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.010 [2024-10-01 13:52:39.287645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.010 [2024-10-01 13:52:39.287662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.010 [2024-10-01 13:52:39.287676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.010 [2024-10-01 13:52:39.287706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.010 [2024-10-01 13:52:39.292678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.010 [2024-10-01 13:52:39.292791] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.010 [2024-10-01 13:52:39.292822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.010 [2024-10-01 13:52:39.292840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.010 [2024-10-01 13:52:39.294084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.010 [2024-10-01 13:52:39.294904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.010 [2024-10-01 13:52:39.294971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.010 [2024-10-01 13:52:39.294991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.010 [2024-10-01 13:52:39.295306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.010 [2024-10-01 13:52:39.297508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.010 [2024-10-01 13:52:39.297649] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.010 [2024-10-01 13:52:39.297682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.010 [2024-10-01 13:52:39.297700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.010 [2024-10-01 13:52:39.297732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.010 [2024-10-01 13:52:39.297764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.010 [2024-10-01 13:52:39.297780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.010 [2024-10-01 13:52:39.297794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.010 [2024-10-01 13:52:39.297825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.010 [2024-10-01 13:52:39.302776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.010 [2024-10-01 13:52:39.302890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.010 [2024-10-01 13:52:39.302939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.010 [2024-10-01 13:52:39.302959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.010 [2024-10-01 13:52:39.302992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.010 [2024-10-01 13:52:39.303023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.010 [2024-10-01 13:52:39.303040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.010 [2024-10-01 13:52:39.303054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.010 [2024-10-01 13:52:39.304275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.010 [2024-10-01 13:52:39.307617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.010 [2024-10-01 13:52:39.307731] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.010 [2024-10-01 13:52:39.307763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.010 [2024-10-01 13:52:39.307781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.010 [2024-10-01 13:52:39.307813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.010 [2024-10-01 13:52:39.309057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.010 [2024-10-01 13:52:39.309095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.010 [2024-10-01 13:52:39.309114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.010 [2024-10-01 13:52:39.309857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.010 [2024-10-01 13:52:39.312869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.010 [2024-10-01 13:52:39.312994] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.010 [2024-10-01 13:52:39.313026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.010 [2024-10-01 13:52:39.313043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.010 [2024-10-01 13:52:39.313637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.010 [2024-10-01 13:52:39.313827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.010 [2024-10-01 13:52:39.313864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.010 [2024-10-01 13:52:39.313882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.010 [2024-10-01 13:52:39.314004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.010 [2024-10-01 13:52:39.317707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.010 [2024-10-01 13:52:39.317821] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.010 [2024-10-01 13:52:39.317852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.010 [2024-10-01 13:52:39.317870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.010 [2024-10-01 13:52:39.317902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.010 [2024-10-01 13:52:39.317951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.010 [2024-10-01 13:52:39.317970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.010 [2024-10-01 13:52:39.317984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.010 [2024-10-01 13:52:39.318015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.010 [2024-10-01 13:52:39.324866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.010 [2024-10-01 13:52:39.325281] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.010 [2024-10-01 13:52:39.325327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.010 [2024-10-01 13:52:39.325347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.010 [2024-10-01 13:52:39.325417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.010 [2024-10-01 13:52:39.325456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.010 [2024-10-01 13:52:39.325474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.010 [2024-10-01 13:52:39.325488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.010 [2024-10-01 13:52:39.325519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.010 [2024-10-01 13:52:39.327798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.010 [2024-10-01 13:52:39.327923] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.010 [2024-10-01 13:52:39.327955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.010 [2024-10-01 13:52:39.327973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.010 [2024-10-01 13:52:39.328006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.010 [2024-10-01 13:52:39.328574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.010 [2024-10-01 13:52:39.328612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.010 [2024-10-01 13:52:39.328631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.010 [2024-10-01 13:52:39.328832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.010 [2024-10-01 13:52:39.336112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.010 [2024-10-01 13:52:39.336230] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.010 [2024-10-01 13:52:39.336261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.010 [2024-10-01 13:52:39.336279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.010 [2024-10-01 13:52:39.336311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.010 [2024-10-01 13:52:39.336343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.010 [2024-10-01 13:52:39.336360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.010 [2024-10-01 13:52:39.336374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.010 [2024-10-01 13:52:39.336405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.010 [2024-10-01 13:52:39.339851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.010 [2024-10-01 13:52:39.340201] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.010 [2024-10-01 13:52:39.340257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.010 [2024-10-01 13:52:39.340278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.010 [2024-10-01 13:52:39.340348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.010 [2024-10-01 13:52:39.340387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.010 [2024-10-01 13:52:39.340405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.010 [2024-10-01 13:52:39.340419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.011 [2024-10-01 13:52:39.340450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.011 [2024-10-01 13:52:39.346385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.011 [2024-10-01 13:52:39.346499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.011 [2024-10-01 13:52:39.346530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.011 [2024-10-01 13:52:39.346569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.011 [2024-10-01 13:52:39.347159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.011 [2024-10-01 13:52:39.347346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.011 [2024-10-01 13:52:39.347374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.011 [2024-10-01 13:52:39.347389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.011 [2024-10-01 13:52:39.347496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.011 [2024-10-01 13:52:39.351037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.011 [2024-10-01 13:52:39.351150] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.011 [2024-10-01 13:52:39.351207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.011 [2024-10-01 13:52:39.351227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.011 [2024-10-01 13:52:39.351261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.011 [2024-10-01 13:52:39.351292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.011 [2024-10-01 13:52:39.351309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.011 [2024-10-01 13:52:39.351324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.011 [2024-10-01 13:52:39.351355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.011 [2024-10-01 13:52:39.356957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.011 [2024-10-01 13:52:39.357070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.011 [2024-10-01 13:52:39.357102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.011 [2024-10-01 13:52:39.357119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.011 [2024-10-01 13:52:39.357152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.011 [2024-10-01 13:52:39.357190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.011 [2024-10-01 13:52:39.357208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.011 [2024-10-01 13:52:39.357222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.011 [2024-10-01 13:52:39.357252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.011 [2024-10-01 13:52:39.361407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.011 [2024-10-01 13:52:39.361522] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.011 [2024-10-01 13:52:39.361553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.011 [2024-10-01 13:52:39.361571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.011 [2024-10-01 13:52:39.362166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.011 [2024-10-01 13:52:39.362352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.011 [2024-10-01 13:52:39.362388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.011 [2024-10-01 13:52:39.362407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.011 [2024-10-01 13:52:39.362526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.011 [2024-10-01 13:52:39.367256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.011 [2024-10-01 13:52:39.367432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.011 [2024-10-01 13:52:39.367465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.011 [2024-10-01 13:52:39.367483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.011 [2024-10-01 13:52:39.367524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.011 [2024-10-01 13:52:39.367575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.011 [2024-10-01 13:52:39.367595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.011 [2024-10-01 13:52:39.367610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.011 [2024-10-01 13:52:39.367640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.011 [2024-10-01 13:52:39.371962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.011 [2024-10-01 13:52:39.372075] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.011 [2024-10-01 13:52:39.372107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.011 [2024-10-01 13:52:39.372125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.011 [2024-10-01 13:52:39.372157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.011 [2024-10-01 13:52:39.372188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.011 [2024-10-01 13:52:39.372205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.011 [2024-10-01 13:52:39.372219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.011 [2024-10-01 13:52:39.372249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.011 [2024-10-01 13:52:39.377348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.011 [2024-10-01 13:52:39.377469] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.011 [2024-10-01 13:52:39.377500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.011 [2024-10-01 13:52:39.377518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.011 [2024-10-01 13:52:39.377550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.011 [2024-10-01 13:52:39.377580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.011 [2024-10-01 13:52:39.377597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.011 [2024-10-01 13:52:39.377611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.011 [2024-10-01 13:52:39.377640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.011 [2024-10-01 13:52:39.382138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.011 [2024-10-01 13:52:39.382253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.011 [2024-10-01 13:52:39.382284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.011 [2024-10-01 13:52:39.382302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.011 [2024-10-01 13:52:39.382333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.011 [2024-10-01 13:52:39.382364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.011 [2024-10-01 13:52:39.382381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.011 [2024-10-01 13:52:39.382395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.011 [2024-10-01 13:52:39.382426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.011 [2024-10-01 13:52:39.387451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.011 [2024-10-01 13:52:39.387563] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.011 [2024-10-01 13:52:39.387595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.011 [2024-10-01 13:52:39.387613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.011 [2024-10-01 13:52:39.387645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.011 [2024-10-01 13:52:39.387676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.011 [2024-10-01 13:52:39.387693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.011 [2024-10-01 13:52:39.387707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.011 [2024-10-01 13:52:39.387737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.011 [2024-10-01 13:52:39.392251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.011 [2024-10-01 13:52:39.392366] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.011 [2024-10-01 13:52:39.392398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.011 [2024-10-01 13:52:39.392416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.011 [2024-10-01 13:52:39.392448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.011 [2024-10-01 13:52:39.392496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.011 [2024-10-01 13:52:39.392517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.011 [2024-10-01 13:52:39.392532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.011 [2024-10-01 13:52:39.392563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.011 [2024-10-01 13:52:39.397545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.011 [2024-10-01 13:52:39.398859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.011 [2024-10-01 13:52:39.398906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.011 [2024-10-01 13:52:39.398941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.011 [2024-10-01 13:52:39.399682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.011 [2024-10-01 13:52:39.400037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.011 [2024-10-01 13:52:39.400076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.011 [2024-10-01 13:52:39.400095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.011 [2024-10-01 13:52:39.400167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.011 [2024-10-01 13:52:39.402346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.011 [2024-10-01 13:52:39.402455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.011 [2024-10-01 13:52:39.402486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.011 [2024-10-01 13:52:39.402524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.011 [2024-10-01 13:52:39.402573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.011 [2024-10-01 13:52:39.402618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.011 [2024-10-01 13:52:39.402637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.011 [2024-10-01 13:52:39.402651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.011 [2024-10-01 13:52:39.402681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.011 [2024-10-01 13:52:39.407640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.011 [2024-10-01 13:52:39.407756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.011 [2024-10-01 13:52:39.407787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.011 [2024-10-01 13:52:39.407805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.011 [2024-10-01 13:52:39.409035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.011 [2024-10-01 13:52:39.409291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.012 [2024-10-01 13:52:39.409330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.012 [2024-10-01 13:52:39.409349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.012 [2024-10-01 13:52:39.410252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.012 [2024-10-01 13:52:39.412434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.012 [2024-10-01 13:52:39.412546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.012 [2024-10-01 13:52:39.412577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.012 [2024-10-01 13:52:39.412594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.012 [2024-10-01 13:52:39.413810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.012 [2024-10-01 13:52:39.414637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.012 [2024-10-01 13:52:39.414678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.012 [2024-10-01 13:52:39.414697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.012 [2024-10-01 13:52:39.415035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.012 [2024-10-01 13:52:39.418480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.012 [2024-10-01 13:52:39.418623] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.012 [2024-10-01 13:52:39.418655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.012 [2024-10-01 13:52:39.418673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.012 [2024-10-01 13:52:39.418706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.012 [2024-10-01 13:52:39.418737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.012 [2024-10-01 13:52:39.418754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.012 [2024-10-01 13:52:39.418791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.012 [2024-10-01 13:52:39.418823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.012 [2024-10-01 13:52:39.422522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.012 [2024-10-01 13:52:39.422653] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.012 [2024-10-01 13:52:39.422684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.012 [2024-10-01 13:52:39.422702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.012 [2024-10-01 13:52:39.422734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.012 [2024-10-01 13:52:39.422765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.012 [2024-10-01 13:52:39.422781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.012 [2024-10-01 13:52:39.422795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.012 [2024-10-01 13:52:39.424029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.012 [2024-10-01 13:52:39.429623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.012 [2024-10-01 13:52:39.429981] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.012 [2024-10-01 13:52:39.430024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.012 [2024-10-01 13:52:39.430045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.012 [2024-10-01 13:52:39.430115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.012 [2024-10-01 13:52:39.430153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.012 [2024-10-01 13:52:39.430171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.012 [2024-10-01 13:52:39.430185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.012 [2024-10-01 13:52:39.430217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.012 [2024-10-01 13:52:39.432624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.012 [2024-10-01 13:52:39.433296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.012 [2024-10-01 13:52:39.433341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.012 [2024-10-01 13:52:39.433362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.012 [2024-10-01 13:52:39.433521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.012 [2024-10-01 13:52:39.433636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.012 [2024-10-01 13:52:39.433665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.012 [2024-10-01 13:52:39.433682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.012 [2024-10-01 13:52:39.433723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.012 [2024-10-01 13:52:39.440847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.012 [2024-10-01 13:52:39.441002] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.012 [2024-10-01 13:52:39.441035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.012 [2024-10-01 13:52:39.441053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.012 [2024-10-01 13:52:39.441086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.012 [2024-10-01 13:52:39.441117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.012 [2024-10-01 13:52:39.441134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.012 [2024-10-01 13:52:39.441148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.012 [2024-10-01 13:52:39.441178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.012 [2024-10-01 13:52:39.444822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.012 [2024-10-01 13:52:39.444990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.012 [2024-10-01 13:52:39.445023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.012 [2024-10-01 13:52:39.445041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.012 [2024-10-01 13:52:39.445075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.012 [2024-10-01 13:52:39.445106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.012 [2024-10-01 13:52:39.445123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.012 [2024-10-01 13:52:39.445137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.012 [2024-10-01 13:52:39.445168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.012 [2024-10-01 13:52:39.451141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.012 [2024-10-01 13:52:39.451255] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.012 [2024-10-01 13:52:39.451298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.012 [2024-10-01 13:52:39.451318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.012 [2024-10-01 13:52:39.451893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.012 [2024-10-01 13:52:39.452096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.012 [2024-10-01 13:52:39.452134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.012 [2024-10-01 13:52:39.452153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.012 [2024-10-01 13:52:39.452261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.012 [2024-10-01 13:52:39.455731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.012 [2024-10-01 13:52:39.455846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.012 [2024-10-01 13:52:39.455877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.012 [2024-10-01 13:52:39.455895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.012 [2024-10-01 13:52:39.455964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.012 [2024-10-01 13:52:39.455998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.012 [2024-10-01 13:52:39.456015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.012 [2024-10-01 13:52:39.456029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.012 [2024-10-01 13:52:39.456060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.012 [2024-10-01 13:52:39.461658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.012 [2024-10-01 13:52:39.461773] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.012 [2024-10-01 13:52:39.461805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.012 [2024-10-01 13:52:39.461822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.012 [2024-10-01 13:52:39.461855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.012 [2024-10-01 13:52:39.461886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.012 [2024-10-01 13:52:39.461903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.012 [2024-10-01 13:52:39.461936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.012 [2024-10-01 13:52:39.461969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.012 [2024-10-01 13:52:39.466094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.012 [2024-10-01 13:52:39.466209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.012 [2024-10-01 13:52:39.466240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.012 [2024-10-01 13:52:39.466258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.012 [2024-10-01 13:52:39.466862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.012 [2024-10-01 13:52:39.467065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.012 [2024-10-01 13:52:39.467102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.012 [2024-10-01 13:52:39.467120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.012 [2024-10-01 13:52:39.467230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.012 [2024-10-01 13:52:39.471940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.012 [2024-10-01 13:52:39.472054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.012 [2024-10-01 13:52:39.472085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.012 [2024-10-01 13:52:39.472103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.012 [2024-10-01 13:52:39.472136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.012 [2024-10-01 13:52:39.472167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.012 [2024-10-01 13:52:39.472185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.012 [2024-10-01 13:52:39.472222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.012 [2024-10-01 13:52:39.472257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.012 [2024-10-01 13:52:39.476694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.012 [2024-10-01 13:52:39.476809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.012 [2024-10-01 13:52:39.476840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.012 [2024-10-01 13:52:39.476858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.012 [2024-10-01 13:52:39.476890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.012 [2024-10-01 13:52:39.476937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.012 [2024-10-01 13:52:39.476958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.012 [2024-10-01 13:52:39.476973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.012 [2024-10-01 13:52:39.477003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.012 [2024-10-01 13:52:39.482146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.012 [2024-10-01 13:52:39.482262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.012 [2024-10-01 13:52:39.482293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.012 [2024-10-01 13:52:39.482311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.012 [2024-10-01 13:52:39.482357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.013 [2024-10-01 13:52:39.482392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.013 [2024-10-01 13:52:39.482410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.013 [2024-10-01 13:52:39.482424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.013 [2024-10-01 13:52:39.482454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.013 [2024-10-01 13:52:39.487017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.013 [2024-10-01 13:52:39.487137] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.013 [2024-10-01 13:52:39.487180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.013 [2024-10-01 13:52:39.487198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.013 [2024-10-01 13:52:39.487230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.013 [2024-10-01 13:52:39.487261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.013 [2024-10-01 13:52:39.487278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.013 [2024-10-01 13:52:39.487293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.013 [2024-10-01 13:52:39.487324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.013 [2024-10-01 13:52:39.492233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.013 [2024-10-01 13:52:39.492354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.013 [2024-10-01 13:52:39.492403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.013 [2024-10-01 13:52:39.492422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.013 [2024-10-01 13:52:39.492455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.013 [2024-10-01 13:52:39.492487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.013 [2024-10-01 13:52:39.492504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.013 [2024-10-01 13:52:39.492519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.013 [2024-10-01 13:52:39.492549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.013 [2024-10-01 13:52:39.497222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.013 [2024-10-01 13:52:39.497339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.013 [2024-10-01 13:52:39.497370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.013 [2024-10-01 13:52:39.497396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.013 [2024-10-01 13:52:39.497428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.013 [2024-10-01 13:52:39.497459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.013 [2024-10-01 13:52:39.497475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.013 [2024-10-01 13:52:39.497490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.013 [2024-10-01 13:52:39.497521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.013 [2024-10-01 13:52:39.502334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.013 [2024-10-01 13:52:39.502479] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.013 [2024-10-01 13:52:39.502512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.013 [2024-10-01 13:52:39.502530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.013 [2024-10-01 13:52:39.502581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.013 [2024-10-01 13:52:39.502613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.013 [2024-10-01 13:52:39.502630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.013 [2024-10-01 13:52:39.502644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.013 [2024-10-01 13:52:39.503864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.013 [2024-10-01 13:52:39.507312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.013 [2024-10-01 13:52:39.507424] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.013 [2024-10-01 13:52:39.507455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.013 [2024-10-01 13:52:39.507473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.013 [2024-10-01 13:52:39.507505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.013 [2024-10-01 13:52:39.507555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.013 [2024-10-01 13:52:39.507574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.013 [2024-10-01 13:52:39.507588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.013 [2024-10-01 13:52:39.507619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.013 [2024-10-01 13:52:39.512452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.013 [2024-10-01 13:52:39.512564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.013 [2024-10-01 13:52:39.512595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.013 [2024-10-01 13:52:39.512614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.013 [2024-10-01 13:52:39.512646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.013 [2024-10-01 13:52:39.512678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.013 [2024-10-01 13:52:39.512695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.013 [2024-10-01 13:52:39.512709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.013 [2024-10-01 13:52:39.513945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.013 [2024-10-01 13:52:39.517402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.013 [2024-10-01 13:52:39.518711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.013 [2024-10-01 13:52:39.518756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.013 [2024-10-01 13:52:39.518777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.013 [2024-10-01 13:52:39.519533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.013 [2024-10-01 13:52:39.519873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.013 [2024-10-01 13:52:39.519927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.013 [2024-10-01 13:52:39.519949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.013 [2024-10-01 13:52:39.520022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.013 [2024-10-01 13:52:39.523308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.013 [2024-10-01 13:52:39.523442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.013 [2024-10-01 13:52:39.523473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.013 [2024-10-01 13:52:39.523491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.013 [2024-10-01 13:52:39.523523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.013 [2024-10-01 13:52:39.523554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.013 [2024-10-01 13:52:39.523572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.013 [2024-10-01 13:52:39.523588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.013 [2024-10-01 13:52:39.523618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.013 [2024-10-01 13:52:39.527490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.013 [2024-10-01 13:52:39.527606] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.013 [2024-10-01 13:52:39.527638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.013 [2024-10-01 13:52:39.527656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.013 [2024-10-01 13:52:39.528893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.013 [2024-10-01 13:52:39.529166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.013 [2024-10-01 13:52:39.529197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.013 [2024-10-01 13:52:39.529212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.013 [2024-10-01 13:52:39.530133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.013 [2024-10-01 13:52:39.534499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.013 [2024-10-01 13:52:39.534844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.013 [2024-10-01 13:52:39.534888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.013 [2024-10-01 13:52:39.534922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.013 [2024-10-01 13:52:39.534998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.013 [2024-10-01 13:52:39.535037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.013 [2024-10-01 13:52:39.535055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.013 [2024-10-01 13:52:39.535079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.013 [2024-10-01 13:52:39.535110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.013 [2024-10-01 13:52:39.538306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.013 [2024-10-01 13:52:39.538425] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.013 [2024-10-01 13:52:39.538456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.013 [2024-10-01 13:52:39.538474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.013 [2024-10-01 13:52:39.538507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.013 [2024-10-01 13:52:39.538552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.013 [2024-10-01 13:52:39.538572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.013 [2024-10-01 13:52:39.538586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.013 [2024-10-01 13:52:39.538617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.013 [2024-10-01 13:52:39.545570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.013 [2024-10-01 13:52:39.545685] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.013 [2024-10-01 13:52:39.545716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.013 [2024-10-01 13:52:39.545761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.013 [2024-10-01 13:52:39.545798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.013 [2024-10-01 13:52:39.545829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.013 [2024-10-01 13:52:39.545846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.013 [2024-10-01 13:52:39.545860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.013 [2024-10-01 13:52:39.545891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.013 [2024-10-01 13:52:39.549567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.013 [2024-10-01 13:52:39.549721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.013 [2024-10-01 13:52:39.549753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.013 [2024-10-01 13:52:39.549771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.013 [2024-10-01 13:52:39.549805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.013 [2024-10-01 13:52:39.549836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.013 [2024-10-01 13:52:39.549853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.013 [2024-10-01 13:52:39.549868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.013 [2024-10-01 13:52:39.549899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.013 [2024-10-01 13:52:39.555841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.013 [2024-10-01 13:52:39.555973] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.013 [2024-10-01 13:52:39.556005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.013 [2024-10-01 13:52:39.556030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.014 [2024-10-01 13:52:39.556606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.014 [2024-10-01 13:52:39.556791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.014 [2024-10-01 13:52:39.556819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.014 [2024-10-01 13:52:39.556834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.014 [2024-10-01 13:52:39.556959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.014 [2024-10-01 13:52:39.560450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.014 [2024-10-01 13:52:39.560566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.014 [2024-10-01 13:52:39.560597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.014 [2024-10-01 13:52:39.560615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.014 [2024-10-01 13:52:39.560647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.014 [2024-10-01 13:52:39.560678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.014 [2024-10-01 13:52:39.560718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.014 [2024-10-01 13:52:39.560734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.014 [2024-10-01 13:52:39.560766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.014 [2024-10-01 13:52:39.566296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.014 [2024-10-01 13:52:39.566411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.014 [2024-10-01 13:52:39.566443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.014 [2024-10-01 13:52:39.566461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.014 [2024-10-01 13:52:39.566494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.014 [2024-10-01 13:52:39.566525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.014 [2024-10-01 13:52:39.566555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.014 [2024-10-01 13:52:39.566571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.014 [2024-10-01 13:52:39.566602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.014 [2024-10-01 13:52:39.570755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.014 [2024-10-01 13:52:39.570870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.014 [2024-10-01 13:52:39.570900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.014 [2024-10-01 13:52:39.570934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.014 [2024-10-01 13:52:39.571515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.014 [2024-10-01 13:52:39.571708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.014 [2024-10-01 13:52:39.571736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.014 [2024-10-01 13:52:39.571752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.014 [2024-10-01 13:52:39.571862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.014 [2024-10-01 13:52:39.576501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.014 [2024-10-01 13:52:39.576616] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.014 [2024-10-01 13:52:39.576648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.014 [2024-10-01 13:52:39.576666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.014 [2024-10-01 13:52:39.576699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.014 [2024-10-01 13:52:39.576730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.014 [2024-10-01 13:52:39.576747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.014 [2024-10-01 13:52:39.576761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.014 [2024-10-01 13:52:39.576791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.014 [2024-10-01 13:52:39.581204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.014 [2024-10-01 13:52:39.581351] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.014 [2024-10-01 13:52:39.581384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.014 [2024-10-01 13:52:39.581402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.014 [2024-10-01 13:52:39.581435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.014 [2024-10-01 13:52:39.581466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.014 [2024-10-01 13:52:39.581483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.014 [2024-10-01 13:52:39.581497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.014 [2024-10-01 13:52:39.581528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.014 [2024-10-01 13:52:39.586606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.014 [2024-10-01 13:52:39.586727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.014 [2024-10-01 13:52:39.586759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.014 [2024-10-01 13:52:39.586777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.014 [2024-10-01 13:52:39.586810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.014 [2024-10-01 13:52:39.586840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.014 [2024-10-01 13:52:39.586858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.014 [2024-10-01 13:52:39.586872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.014 [2024-10-01 13:52:39.586903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.014 [2024-10-01 13:52:39.591467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.014 [2024-10-01 13:52:39.591594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.014 [2024-10-01 13:52:39.591625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.014 [2024-10-01 13:52:39.591644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.014 [2024-10-01 13:52:39.591675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.014 [2024-10-01 13:52:39.591706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.014 [2024-10-01 13:52:39.591723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.014 [2024-10-01 13:52:39.591738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.014 [2024-10-01 13:52:39.591768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.014 [2024-10-01 13:52:39.596702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.014 [2024-10-01 13:52:39.596829] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.014 [2024-10-01 13:52:39.596860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.014 [2024-10-01 13:52:39.596877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.014 [2024-10-01 13:52:39.596952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.014 [2024-10-01 13:52:39.596988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.014 [2024-10-01 13:52:39.597005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.014 [2024-10-01 13:52:39.597019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.014 [2024-10-01 13:52:39.597063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.014 [2024-10-01 13:52:39.601697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.014 [2024-10-01 13:52:39.601814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.014 [2024-10-01 13:52:39.601851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.014 [2024-10-01 13:52:39.601869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.014 [2024-10-01 13:52:39.601935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.014 [2024-10-01 13:52:39.601973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.014 [2024-10-01 13:52:39.601991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.014 [2024-10-01 13:52:39.602005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.014 [2024-10-01 13:52:39.602037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.014 [2024-10-01 13:52:39.606800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.014 [2024-10-01 13:52:39.606938] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.014 [2024-10-01 13:52:39.606977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.014 [2024-10-01 13:52:39.606995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.014 [2024-10-01 13:52:39.607029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.014 [2024-10-01 13:52:39.607074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.014 [2024-10-01 13:52:39.607093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.014 [2024-10-01 13:52:39.607109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.014 [2024-10-01 13:52:39.607140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.014 [2024-10-01 13:52:39.611793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.014 [2024-10-01 13:52:39.611947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.014 [2024-10-01 13:52:39.611982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.014 [2024-10-01 13:52:39.612001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.014 [2024-10-01 13:52:39.612036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.014 [2024-10-01 13:52:39.612068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.014 [2024-10-01 13:52:39.612085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.014 [2024-10-01 13:52:39.612137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.014 [2024-10-01 13:52:39.612173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.014 [2024-10-01 13:52:39.616970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.014 [2024-10-01 13:52:39.617091] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.014 [2024-10-01 13:52:39.617123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.014 [2024-10-01 13:52:39.617141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.014 [2024-10-01 13:52:39.617188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.014 [2024-10-01 13:52:39.617222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.014 [2024-10-01 13:52:39.617240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.014 [2024-10-01 13:52:39.617255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.014 [2024-10-01 13:52:39.617285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.014 [2024-10-01 13:52:39.621895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.015 [2024-10-01 13:52:39.622028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.015 [2024-10-01 13:52:39.622060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.015 [2024-10-01 13:52:39.622078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.015 [2024-10-01 13:52:39.622111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.015 [2024-10-01 13:52:39.622143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.015 [2024-10-01 13:52:39.622159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.015 [2024-10-01 13:52:39.622174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.015 [2024-10-01 13:52:39.622205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.015 [2024-10-01 13:52:39.627078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.015 [2024-10-01 13:52:39.627193] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.015 [2024-10-01 13:52:39.627225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.015 [2024-10-01 13:52:39.627242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.015 [2024-10-01 13:52:39.627275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.015 [2024-10-01 13:52:39.627305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.015 [2024-10-01 13:52:39.627322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.015 [2024-10-01 13:52:39.627337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.015 [2024-10-01 13:52:39.627368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.015 [2024-10-01 13:52:39.632007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.015 [2024-10-01 13:52:39.632123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.015 [2024-10-01 13:52:39.632181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.015 [2024-10-01 13:52:39.632201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.015 [2024-10-01 13:52:39.632251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.015 [2024-10-01 13:52:39.632287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.015 [2024-10-01 13:52:39.632304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.015 [2024-10-01 13:52:39.632318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.015 [2024-10-01 13:52:39.632350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.015 [2024-10-01 13:52:39.637173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.015 [2024-10-01 13:52:39.637289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.015 [2024-10-01 13:52:39.637321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.015 [2024-10-01 13:52:39.637339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.015 [2024-10-01 13:52:39.638585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.015 [2024-10-01 13:52:39.639380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.015 [2024-10-01 13:52:39.639420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.015 [2024-10-01 13:52:39.639439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.015 [2024-10-01 13:52:39.639758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.015 [2024-10-01 13:52:39.642099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.015 [2024-10-01 13:52:39.642210] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.015 [2024-10-01 13:52:39.642240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.015 [2024-10-01 13:52:39.642258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.015 [2024-10-01 13:52:39.642290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.015 [2024-10-01 13:52:39.642321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.015 [2024-10-01 13:52:39.642338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.015 [2024-10-01 13:52:39.642352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.015 [2024-10-01 13:52:39.642383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.015 [2024-10-01 13:52:39.647267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.015 [2024-10-01 13:52:39.647381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.015 [2024-10-01 13:52:39.647412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.015 [2024-10-01 13:52:39.647430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.015 [2024-10-01 13:52:39.647463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.015 [2024-10-01 13:52:39.648717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.015 [2024-10-01 13:52:39.648758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.015 [2024-10-01 13:52:39.648777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.015 [2024-10-01 13:52:39.649026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.015 [2024-10-01 13:52:39.652189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.015 [2024-10-01 13:52:39.652304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.015 [2024-10-01 13:52:39.652335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.015 [2024-10-01 13:52:39.652352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.015 [2024-10-01 13:52:39.653576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.015 [2024-10-01 13:52:39.654388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.015 [2024-10-01 13:52:39.654429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.015 [2024-10-01 13:52:39.654448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.015 [2024-10-01 13:52:39.654783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.015 [2024-10-01 13:52:39.658065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.015 [2024-10-01 13:52:39.658261] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.015 [2024-10-01 13:52:39.658293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.015 [2024-10-01 13:52:39.658312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.015 [2024-10-01 13:52:39.658353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.015 [2024-10-01 13:52:39.658385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.015 [2024-10-01 13:52:39.658403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.015 [2024-10-01 13:52:39.658418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.015 [2024-10-01 13:52:39.658448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.015 [2024-10-01 13:52:39.662284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.015 [2024-10-01 13:52:39.662397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.015 [2024-10-01 13:52:39.662429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.015 [2024-10-01 13:52:39.662447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.015 [2024-10-01 13:52:39.663697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.015 [2024-10-01 13:52:39.663951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.015 [2024-10-01 13:52:39.663988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.015 [2024-10-01 13:52:39.664007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.015 [2024-10-01 13:52:39.664900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.015 [2024-10-01 13:52:39.669280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.015 [2024-10-01 13:52:39.669626] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.015 [2024-10-01 13:52:39.669671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.015 [2024-10-01 13:52:39.669692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.015 [2024-10-01 13:52:39.669763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.015 [2024-10-01 13:52:39.669803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.015 [2024-10-01 13:52:39.669821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.015 [2024-10-01 13:52:39.669846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.015 [2024-10-01 13:52:39.669877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.015 [2024-10-01 13:52:39.672375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.015 [2024-10-01 13:52:39.673051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.015 [2024-10-01 13:52:39.673092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.015 [2024-10-01 13:52:39.673121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.015 [2024-10-01 13:52:39.673284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.015 [2024-10-01 13:52:39.673401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.015 [2024-10-01 13:52:39.673422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.015 [2024-10-01 13:52:39.673436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.015 [2024-10-01 13:52:39.673475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.015 [2024-10-01 13:52:39.680759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.015 [2024-10-01 13:52:39.680885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.015 [2024-10-01 13:52:39.680932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.015 [2024-10-01 13:52:39.680954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.015 [2024-10-01 13:52:39.680988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.015 [2024-10-01 13:52:39.681021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.015 [2024-10-01 13:52:39.681038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.015 [2024-10-01 13:52:39.681053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.015 [2024-10-01 13:52:39.681085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.015 [2024-10-01 13:52:39.683670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.015 [2024-10-01 13:52:39.684546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.015 [2024-10-01 13:52:39.684591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.015 [2024-10-01 13:52:39.684642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.015 [2024-10-01 13:52:39.684983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.015 [2024-10-01 13:52:39.685077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.015 [2024-10-01 13:52:39.685108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.015 [2024-10-01 13:52:39.685125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.015 [2024-10-01 13:52:39.685159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.015 [2024-10-01 13:52:39.691208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.015 [2024-10-01 13:52:39.691331] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.015 [2024-10-01 13:52:39.691363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.015 [2024-10-01 13:52:39.691381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.015 [2024-10-01 13:52:39.691979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.015 [2024-10-01 13:52:39.692168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.015 [2024-10-01 13:52:39.692205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.015 [2024-10-01 13:52:39.692224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.016 [2024-10-01 13:52:39.692352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.016 [2024-10-01 13:52:39.695063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.016 [2024-10-01 13:52:39.695896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.016 [2024-10-01 13:52:39.695950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.016 [2024-10-01 13:52:39.695971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.016 [2024-10-01 13:52:39.696073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.016 [2024-10-01 13:52:39.696112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.016 [2024-10-01 13:52:39.696129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.016 [2024-10-01 13:52:39.696144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.016 [2024-10-01 13:52:39.696175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.016 [2024-10-01 13:52:39.701860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.016 [2024-10-01 13:52:39.702031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.016 [2024-10-01 13:52:39.702066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.016 [2024-10-01 13:52:39.702085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.016 [2024-10-01 13:52:39.702120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.016 [2024-10-01 13:52:39.702152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.016 [2024-10-01 13:52:39.702206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.016 [2024-10-01 13:52:39.702223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.016 [2024-10-01 13:52:39.702256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.016 [2024-10-01 13:52:39.706398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.016 [2024-10-01 13:52:39.706514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.016 [2024-10-01 13:52:39.706560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.016 [2024-10-01 13:52:39.706581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.016 [2024-10-01 13:52:39.707192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.016 [2024-10-01 13:52:39.707408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.016 [2024-10-01 13:52:39.707437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.016 [2024-10-01 13:52:39.707452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.016 [2024-10-01 13:52:39.707564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.016 [2024-10-01 13:52:39.712306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.016 [2024-10-01 13:52:39.712440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.016 [2024-10-01 13:52:39.712472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.016 [2024-10-01 13:52:39.712490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.016 [2024-10-01 13:52:39.712523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.016 [2024-10-01 13:52:39.712555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.016 [2024-10-01 13:52:39.712572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.016 [2024-10-01 13:52:39.712586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.016 [2024-10-01 13:52:39.712616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.016 [2024-10-01 13:52:39.717032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.016 [2024-10-01 13:52:39.717148] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.016 [2024-10-01 13:52:39.717180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.016 [2024-10-01 13:52:39.717198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.016 [2024-10-01 13:52:39.717231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.016 [2024-10-01 13:52:39.717275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.016 [2024-10-01 13:52:39.717294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.016 [2024-10-01 13:52:39.717309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.016 [2024-10-01 13:52:39.717340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.016 [2024-10-01 13:52:39.722416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.016 [2024-10-01 13:52:39.722573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.016 [2024-10-01 13:52:39.722606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.016 [2024-10-01 13:52:39.722624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.016 [2024-10-01 13:52:39.722673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.016 [2024-10-01 13:52:39.722709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.016 [2024-10-01 13:52:39.722727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.016 [2024-10-01 13:52:39.722741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.016 [2024-10-01 13:52:39.722772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.016 [2024-10-01 13:52:39.727225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.016 [2024-10-01 13:52:39.727345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.016 [2024-10-01 13:52:39.727376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.016 [2024-10-01 13:52:39.727395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.016 [2024-10-01 13:52:39.727439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.016 [2024-10-01 13:52:39.727472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.016 [2024-10-01 13:52:39.727489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.016 [2024-10-01 13:52:39.727503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.016 [2024-10-01 13:52:39.727534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.016 [2024-10-01 13:52:39.732540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.016 [2024-10-01 13:52:39.732654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.016 [2024-10-01 13:52:39.732685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.016 [2024-10-01 13:52:39.732703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.016 [2024-10-01 13:52:39.732735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.016 [2024-10-01 13:52:39.732766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.016 [2024-10-01 13:52:39.732783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.016 [2024-10-01 13:52:39.732798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.016 [2024-10-01 13:52:39.733384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.016 [2024-10-01 13:52:39.737333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.016 [2024-10-01 13:52:39.737450] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.016 [2024-10-01 13:52:39.737482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.016 [2024-10-01 13:52:39.737500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.016 [2024-10-01 13:52:39.737555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.016 [2024-10-01 13:52:39.737587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.016 [2024-10-01 13:52:39.737604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.016 [2024-10-01 13:52:39.737618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.016 [2024-10-01 13:52:39.737648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.016 [2024-10-01 13:52:39.742632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.016 [2024-10-01 13:52:39.743972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.016 [2024-10-01 13:52:39.744018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.016 [2024-10-01 13:52:39.744040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.016 [2024-10-01 13:52:39.744810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.016 [2024-10-01 13:52:39.745171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.016 [2024-10-01 13:52:39.745211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.016 [2024-10-01 13:52:39.745231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.016 [2024-10-01 13:52:39.745304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.016 [2024-10-01 13:52:39.747423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.016 [2024-10-01 13:52:39.747538] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.016 [2024-10-01 13:52:39.747570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.016 [2024-10-01 13:52:39.747588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.016 [2024-10-01 13:52:39.747620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.016 [2024-10-01 13:52:39.747651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.016 [2024-10-01 13:52:39.747668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.016 [2024-10-01 13:52:39.747683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.016 [2024-10-01 13:52:39.747714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.016 [2024-10-01 13:52:39.752740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.016 [2024-10-01 13:52:39.752870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.016 [2024-10-01 13:52:39.752902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.016 [2024-10-01 13:52:39.752938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.016 [2024-10-01 13:52:39.754176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.016 [2024-10-01 13:52:39.754416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.016 [2024-10-01 13:52:39.754453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.016 [2024-10-01 13:52:39.754495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.016 [2024-10-01 13:52:39.755427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.016 [2024-10-01 13:52:39.757510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.016 [2024-10-01 13:52:39.757625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.016 [2024-10-01 13:52:39.757656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.016 [2024-10-01 13:52:39.757674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.016 [2024-10-01 13:52:39.757707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.016 [2024-10-01 13:52:39.758963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.016 [2024-10-01 13:52:39.759003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.016 [2024-10-01 13:52:39.759022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.016 [2024-10-01 13:52:39.759791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.016 [2024-10-01 13:52:39.762851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.016 [2024-10-01 13:52:39.763524] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.016 [2024-10-01 13:52:39.763569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.016 [2024-10-01 13:52:39.763591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.016 [2024-10-01 13:52:39.763758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.016 [2024-10-01 13:52:39.763877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.017 [2024-10-01 13:52:39.763898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.017 [2024-10-01 13:52:39.763927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.017 [2024-10-01 13:52:39.763971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.017 [2024-10-01 13:52:39.767595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.017 [2024-10-01 13:52:39.767711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.017 [2024-10-01 13:52:39.767743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.017 [2024-10-01 13:52:39.767761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.017 [2024-10-01 13:52:39.767794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.017 [2024-10-01 13:52:39.767826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.017 [2024-10-01 13:52:39.767844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.017 [2024-10-01 13:52:39.767858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.017 [2024-10-01 13:52:39.767889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.017 [2024-10-01 13:52:39.774904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.017 [2024-10-01 13:52:39.775343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.017 [2024-10-01 13:52:39.775417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.017 [2024-10-01 13:52:39.775441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.017 [2024-10-01 13:52:39.775515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.017 [2024-10-01 13:52:39.775553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.017 [2024-10-01 13:52:39.775572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.017 [2024-10-01 13:52:39.775587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.017 [2024-10-01 13:52:39.775618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.017 [2024-10-01 13:52:39.777690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.017 [2024-10-01 13:52:39.777802] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.017 [2024-10-01 13:52:39.777833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.017 [2024-10-01 13:52:39.777851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.017 [2024-10-01 13:52:39.777883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.017 [2024-10-01 13:52:39.777931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.017 [2024-10-01 13:52:39.777952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.017 [2024-10-01 13:52:39.777966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.017 [2024-10-01 13:52:39.778558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.017 [2024-10-01 13:52:39.786123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.017 [2024-10-01 13:52:39.786243] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.017 [2024-10-01 13:52:39.786275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.017 [2024-10-01 13:52:39.786292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.017 [2024-10-01 13:52:39.786324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.017 [2024-10-01 13:52:39.786355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.017 [2024-10-01 13:52:39.786372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.017 [2024-10-01 13:52:39.786386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.017 [2024-10-01 13:52:39.786418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.017 [2024-10-01 13:52:39.787778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.017 [2024-10-01 13:52:39.787887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.017 [2024-10-01 13:52:39.787932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.017 [2024-10-01 13:52:39.787953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.017 [2024-10-01 13:52:39.789174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.017 [2024-10-01 13:52:39.789990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.017 [2024-10-01 13:52:39.790029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.017 [2024-10-01 13:52:39.790048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.017 [2024-10-01 13:52:39.790372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.017 [2024-10-01 13:52:39.796392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.017 [2024-10-01 13:52:39.796508] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.017 [2024-10-01 13:52:39.796540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.017 [2024-10-01 13:52:39.796558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.017 [2024-10-01 13:52:39.797151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.017 [2024-10-01 13:52:39.797338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.017 [2024-10-01 13:52:39.797374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.017 [2024-10-01 13:52:39.797392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.017 [2024-10-01 13:52:39.797501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.017 [2024-10-01 13:52:39.797866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.017 [2024-10-01 13:52:39.799176] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.017 [2024-10-01 13:52:39.799221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.017 [2024-10-01 13:52:39.799241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.017 [2024-10-01 13:52:39.799467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.017 [2024-10-01 13:52:39.800376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.017 [2024-10-01 13:52:39.800415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.017 [2024-10-01 13:52:39.800434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.017 [2024-10-01 13:52:39.801165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.017 [2024-10-01 13:52:39.806817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.017 [2024-10-01 13:52:39.806946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.017 [2024-10-01 13:52:39.806979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.017 [2024-10-01 13:52:39.806997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.017 [2024-10-01 13:52:39.807030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.017 [2024-10-01 13:52:39.807061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.017 [2024-10-01 13:52:39.807079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.017 [2024-10-01 13:52:39.807093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.017 [2024-10-01 13:52:39.807123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.017 [2024-10-01 13:52:39.808521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.017 [2024-10-01 13:52:39.808641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.017 [2024-10-01 13:52:39.808672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.017 [2024-10-01 13:52:39.808689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.017 [2024-10-01 13:52:39.808722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.017 [2024-10-01 13:52:39.808753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.017 [2024-10-01 13:52:39.808770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.017 [2024-10-01 13:52:39.808785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.017 [2024-10-01 13:52:39.808815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.017 [2024-10-01 13:52:39.817014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.017 [2024-10-01 13:52:39.817144] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.017 [2024-10-01 13:52:39.817177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.017 [2024-10-01 13:52:39.817196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.017 [2024-10-01 13:52:39.817229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.017 [2024-10-01 13:52:39.817260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.017 [2024-10-01 13:52:39.817277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.017 [2024-10-01 13:52:39.817291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.017 [2024-10-01 13:52:39.817322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.017 [2024-10-01 13:52:39.819807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.017 [2024-10-01 13:52:39.819974] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.017 [2024-10-01 13:52:39.820006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.017 [2024-10-01 13:52:39.820024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.017 [2024-10-01 13:52:39.820058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.017 [2024-10-01 13:52:39.820089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.017 [2024-10-01 13:52:39.820106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.017 [2024-10-01 13:52:39.820121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.017 [2024-10-01 13:52:39.820152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.017 [2024-10-01 13:52:39.827129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.017 [2024-10-01 13:52:39.827253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.017 [2024-10-01 13:52:39.827285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.017 [2024-10-01 13:52:39.827336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.017 [2024-10-01 13:52:39.827372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.017 [2024-10-01 13:52:39.827405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.017 [2024-10-01 13:52:39.827422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.017 [2024-10-01 13:52:39.827437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.017 [2024-10-01 13:52:39.827468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.017 [2024-10-01 13:52:39.830850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.017 [2024-10-01 13:52:39.830982] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.017 [2024-10-01 13:52:39.831013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.017 [2024-10-01 13:52:39.831031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.017 [2024-10-01 13:52:39.831064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.017 [2024-10-01 13:52:39.831095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.017 [2024-10-01 13:52:39.831113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.017 [2024-10-01 13:52:39.831127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.017 [2024-10-01 13:52:39.831157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.017 [2024-10-01 13:52:39.837226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.017 [2024-10-01 13:52:39.837345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.017 [2024-10-01 13:52:39.837377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.017 [2024-10-01 13:52:39.837395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.017 [2024-10-01 13:52:39.837429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.018 [2024-10-01 13:52:39.837460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.018 [2024-10-01 13:52:39.837477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.018 [2024-10-01 13:52:39.837492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.018 [2024-10-01 13:52:39.837523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.018 [2024-10-01 13:52:39.841175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.018 [2024-10-01 13:52:39.841290] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.018 [2024-10-01 13:52:39.841321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.018 [2024-10-01 13:52:39.841339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.018 [2024-10-01 13:52:39.841934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.018 [2024-10-01 13:52:39.842121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.018 [2024-10-01 13:52:39.842183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.018 [2024-10-01 13:52:39.842203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.018 [2024-10-01 13:52:39.842315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.018 [2024-10-01 13:52:39.847317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.018 [2024-10-01 13:52:39.847432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.018 [2024-10-01 13:52:39.847464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.018 [2024-10-01 13:52:39.847482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.018 [2024-10-01 13:52:39.847515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.018 [2024-10-01 13:52:39.847546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.018 [2024-10-01 13:52:39.847563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.018 [2024-10-01 13:52:39.847578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.018 [2024-10-01 13:52:39.848805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.018 [2024-10-01 13:52:39.851681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.018 [2024-10-01 13:52:39.851810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.018 [2024-10-01 13:52:39.851842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.018 [2024-10-01 13:52:39.851860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.018 [2024-10-01 13:52:39.851892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.018 [2024-10-01 13:52:39.851943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.018 [2024-10-01 13:52:39.851964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.018 [2024-10-01 13:52:39.851978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.018 [2024-10-01 13:52:39.852009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.018 [2024-10-01 13:52:39.857408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.018 [2024-10-01 13:52:39.857521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.018 [2024-10-01 13:52:39.857553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.018 [2024-10-01 13:52:39.857571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.018 [2024-10-01 13:52:39.857604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.018 [2024-10-01 13:52:39.857647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.018 [2024-10-01 13:52:39.857668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.018 [2024-10-01 13:52:39.857682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.018 [2024-10-01 13:52:39.858923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.018 [2024-10-01 13:52:39.861889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.018 [2024-10-01 13:52:39.862037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.018 [2024-10-01 13:52:39.862069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.018 [2024-10-01 13:52:39.862087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.018 [2024-10-01 13:52:39.862120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.018 [2024-10-01 13:52:39.862161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.018 [2024-10-01 13:52:39.862177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.018 [2024-10-01 13:52:39.862191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.018 [2024-10-01 13:52:39.862222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.018 8086.55 IOPS, 31.59 MiB/s [2024-10-01 13:52:39.869829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.018 [2024-10-01 13:52:39.871466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.018 [2024-10-01 13:52:39.871513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.018 [2024-10-01 13:52:39.871535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.018 [2024-10-01 13:52:39.872242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.018 [2024-10-01 13:52:39.872459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.018 [2024-10-01 13:52:39.872503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.018 [2024-10-01 13:52:39.872522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.018 [2024-10-01 13:52:39.872633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.018 [2024-10-01 13:52:39.872660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.018 [2024-10-01 13:52:39.872753] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.018 [2024-10-01 13:52:39.872783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.018 [2024-10-01 13:52:39.872801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.018 [2024-10-01 13:52:39.874038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.018 [2024-10-01 13:52:39.874285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.018 [2024-10-01 13:52:39.874328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.018 [2024-10-01 13:52:39.874343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.018 [2024-10-01 13:52:39.875260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.018 [2024-10-01 13:52:39.880051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.018 [2024-10-01 13:52:39.880167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.018 [2024-10-01 13:52:39.880199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.018 [2024-10-01 13:52:39.880217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.018 [2024-10-01 13:52:39.880287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.018 [2024-10-01 13:52:39.880320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.018 [2024-10-01 13:52:39.880337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.018 [2024-10-01 13:52:39.880352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.018 [2024-10-01 13:52:39.880383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.018 [2024-10-01 13:52:39.882726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.018 [2024-10-01 13:52:39.883402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.018 [2024-10-01 13:52:39.883446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.018 [2024-10-01 13:52:39.883467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.018 [2024-10-01 13:52:39.883657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.018 [2024-10-01 13:52:39.883774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.018 [2024-10-01 13:52:39.883804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.018 [2024-10-01 13:52:39.883822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.018 [2024-10-01 13:52:39.883862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.018 [2024-10-01 13:52:39.890907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.018 [2024-10-01 13:52:39.891037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.018 [2024-10-01 13:52:39.891069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.018 [2024-10-01 13:52:39.891087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.018 [2024-10-01 13:52:39.891120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.018 [2024-10-01 13:52:39.891151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.018 [2024-10-01 13:52:39.891168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.018 [2024-10-01 13:52:39.891182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.018 [2024-10-01 13:52:39.891212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.018 [2024-10-01 13:52:39.894703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.018 [2024-10-01 13:52:39.895066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.018 [2024-10-01 13:52:39.895110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.018 [2024-10-01 13:52:39.895131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.018 [2024-10-01 13:52:39.895201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.018 [2024-10-01 13:52:39.895239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.018 [2024-10-01 13:52:39.895256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.018 [2024-10-01 13:52:39.895287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.018 [2024-10-01 13:52:39.895321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.018 [2024-10-01 13:52:39.901359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.018 [2024-10-01 13:52:39.901474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.018 [2024-10-01 13:52:39.901506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.018 [2024-10-01 13:52:39.901523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.018 [2024-10-01 13:52:39.902120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.018 [2024-10-01 13:52:39.902306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.018 [2024-10-01 13:52:39.902334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.018 [2024-10-01 13:52:39.902349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.018 [2024-10-01 13:52:39.902455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.018 [2024-10-01 13:52:39.906016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.018 [2024-10-01 13:52:39.906130] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.018 [2024-10-01 13:52:39.906162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.019 [2024-10-01 13:52:39.906191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.019 [2024-10-01 13:52:39.906223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.019 [2024-10-01 13:52:39.906254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.019 [2024-10-01 13:52:39.906271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.019 [2024-10-01 13:52:39.906285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.019 [2024-10-01 13:52:39.906315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.019 [2024-10-01 13:52:39.911985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.019 [2024-10-01 13:52:39.912106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.019 [2024-10-01 13:52:39.912138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.019 [2024-10-01 13:52:39.912155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.019 [2024-10-01 13:52:39.912188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.019 [2024-10-01 13:52:39.912219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.019 [2024-10-01 13:52:39.912237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.019 [2024-10-01 13:52:39.912251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.019 [2024-10-01 13:52:39.912282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.019 [2024-10-01 13:52:39.916443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.019 [2024-10-01 13:52:39.916556] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.019 [2024-10-01 13:52:39.916607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.019 [2024-10-01 13:52:39.916627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.019 [2024-10-01 13:52:39.917217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.019 [2024-10-01 13:52:39.917404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.019 [2024-10-01 13:52:39.917440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.019 [2024-10-01 13:52:39.917458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.019 [2024-10-01 13:52:39.917566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.019 [2024-10-01 13:52:39.922312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.019 [2024-10-01 13:52:39.922436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.019 [2024-10-01 13:52:39.922467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.019 [2024-10-01 13:52:39.922485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.019 [2024-10-01 13:52:39.922518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.019 [2024-10-01 13:52:39.922560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.019 [2024-10-01 13:52:39.922581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.019 [2024-10-01 13:52:39.922595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.019 [2024-10-01 13:52:39.922626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.019 [2024-10-01 13:52:39.927081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.019 [2024-10-01 13:52:39.927196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.019 [2024-10-01 13:52:39.927227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.019 [2024-10-01 13:52:39.927245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.019 [2024-10-01 13:52:39.927277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.019 [2024-10-01 13:52:39.927308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.019 [2024-10-01 13:52:39.927324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.019 [2024-10-01 13:52:39.927338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.019 [2024-10-01 13:52:39.927368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.019 [2024-10-01 13:52:39.932462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.019 [2024-10-01 13:52:39.932576] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.019 [2024-10-01 13:52:39.932608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.019 [2024-10-01 13:52:39.932625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.019 [2024-10-01 13:52:39.932657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.019 [2024-10-01 13:52:39.932713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.019 [2024-10-01 13:52:39.932731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.019 [2024-10-01 13:52:39.932745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.019 [2024-10-01 13:52:39.932775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.019 [2024-10-01 13:52:39.937293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.019 [2024-10-01 13:52:39.937407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.019 [2024-10-01 13:52:39.937439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.019 [2024-10-01 13:52:39.937456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.019 [2024-10-01 13:52:39.937489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.019 [2024-10-01 13:52:39.937520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.019 [2024-10-01 13:52:39.937536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.019 [2024-10-01 13:52:39.937550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.019 [2024-10-01 13:52:39.937580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.019 [2024-10-01 13:52:39.942562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.019 [2024-10-01 13:52:39.942676] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.019 [2024-10-01 13:52:39.942708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.019 [2024-10-01 13:52:39.942725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.019 [2024-10-01 13:52:39.942757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.019 [2024-10-01 13:52:39.942788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.019 [2024-10-01 13:52:39.942804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.019 [2024-10-01 13:52:39.942818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.019 [2024-10-01 13:52:39.942848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.019 [2024-10-01 13:52:39.947383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.019 [2024-10-01 13:52:39.947497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.019 [2024-10-01 13:52:39.947529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.019 [2024-10-01 13:52:39.947546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.019 [2024-10-01 13:52:39.947594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.019 [2024-10-01 13:52:39.947629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.019 [2024-10-01 13:52:39.947646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.019 [2024-10-01 13:52:39.947660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.019 [2024-10-01 13:52:39.947710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.019 [2024-10-01 13:52:39.952653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.019 [2024-10-01 13:52:39.952770] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.019 [2024-10-01 13:52:39.952802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.019 [2024-10-01 13:52:39.952820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.019 [2024-10-01 13:52:39.954045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.019 [2024-10-01 13:52:39.954821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.019 [2024-10-01 13:52:39.954861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.019 [2024-10-01 13:52:39.954879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.019 [2024-10-01 13:52:39.955213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.019 [2024-10-01 13:52:39.957474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.019 [2024-10-01 13:52:39.957584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.019 [2024-10-01 13:52:39.957615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.019 [2024-10-01 13:52:39.957633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.019 [2024-10-01 13:52:39.957665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.019 [2024-10-01 13:52:39.957695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.019 [2024-10-01 13:52:39.957712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.019 [2024-10-01 13:52:39.957726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.019 [2024-10-01 13:52:39.958335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.019 [2024-10-01 13:52:39.962743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.019 [2024-10-01 13:52:39.964051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.019 [2024-10-01 13:52:39.964097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.019 [2024-10-01 13:52:39.964118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.019 [2024-10-01 13:52:39.964343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.019 [2024-10-01 13:52:39.965252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.019 [2024-10-01 13:52:39.965291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.019 [2024-10-01 13:52:39.965310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.019 [2024-10-01 13:52:39.966039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.019 [2024-10-01 13:52:39.967564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.019 [2024-10-01 13:52:39.968863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.019 [2024-10-01 13:52:39.968908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.019 [2024-10-01 13:52:39.968960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.019 [2024-10-01 13:52:39.969725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.019 [2024-10-01 13:52:39.970080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.019 [2024-10-01 13:52:39.970119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.019 [2024-10-01 13:52:39.970137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.019 [2024-10-01 13:52:39.970209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.019 [2024-10-01 13:52:39.973460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.019 [2024-10-01 13:52:39.973582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.019 [2024-10-01 13:52:39.973613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.019 [2024-10-01 13:52:39.973630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.019 [2024-10-01 13:52:39.973662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.019 [2024-10-01 13:52:39.973694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.019 [2024-10-01 13:52:39.973711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.019 [2024-10-01 13:52:39.973727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.019 [2024-10-01 13:52:39.973757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.019 [2024-10-01 13:52:39.977652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.019 [2024-10-01 13:52:39.977764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.020 [2024-10-01 13:52:39.977795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.020 [2024-10-01 13:52:39.977813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.020 [2024-10-01 13:52:39.979046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.020 [2024-10-01 13:52:39.979279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.020 [2024-10-01 13:52:39.979316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.020 [2024-10-01 13:52:39.979334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.020 [2024-10-01 13:52:39.980242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.020 [2024-10-01 13:52:39.984522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.020 [2024-10-01 13:52:39.984858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.020 [2024-10-01 13:52:39.984902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.020 [2024-10-01 13:52:39.984939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.020 [2024-10-01 13:52:39.985012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.020 [2024-10-01 13:52:39.985050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.020 [2024-10-01 13:52:39.985085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.020 [2024-10-01 13:52:39.985101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.020 [2024-10-01 13:52:39.985133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.020 [2024-10-01 13:52:39.988328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.020 [2024-10-01 13:52:39.988452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.020 [2024-10-01 13:52:39.988484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.020 [2024-10-01 13:52:39.988501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.020 [2024-10-01 13:52:39.988534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.020 [2024-10-01 13:52:39.988565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.020 [2024-10-01 13:52:39.988581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.020 [2024-10-01 13:52:39.988595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.020 [2024-10-01 13:52:39.988625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.020 [2024-10-01 13:52:39.995619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.020 [2024-10-01 13:52:39.995735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.020 [2024-10-01 13:52:39.995767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.020 [2024-10-01 13:52:39.995784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.020 [2024-10-01 13:52:39.995817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.020 [2024-10-01 13:52:39.995847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.020 [2024-10-01 13:52:39.995864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.020 [2024-10-01 13:52:39.995878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.020 [2024-10-01 13:52:39.995908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.020 [2024-10-01 13:52:39.999414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.020 [2024-10-01 13:52:39.999748] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.020 [2024-10-01 13:52:39.999796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.020 [2024-10-01 13:52:39.999816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.020 [2024-10-01 13:52:39.999884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.020 [2024-10-01 13:52:39.999940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.020 [2024-10-01 13:52:39.999962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.020 [2024-10-01 13:52:39.999976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.020 [2024-10-01 13:52:40.000009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.020 [2024-10-01 13:52:40.005891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.020 [2024-10-01 13:52:40.006020] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.020 [2024-10-01 13:52:40.006052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.020 [2024-10-01 13:52:40.006069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.020 [2024-10-01 13:52:40.006652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.020 [2024-10-01 13:52:40.006838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.020 [2024-10-01 13:52:40.006876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.020 [2024-10-01 13:52:40.006895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.020 [2024-10-01 13:52:40.007017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.020 [2024-10-01 13:52:40.010474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.020 [2024-10-01 13:52:40.010597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.020 [2024-10-01 13:52:40.010628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.020 [2024-10-01 13:52:40.010646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.020 [2024-10-01 13:52:40.010678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.020 [2024-10-01 13:52:40.010709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.020 [2024-10-01 13:52:40.010726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.020 [2024-10-01 13:52:40.010740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.020 [2024-10-01 13:52:40.010771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.020 [2024-10-01 13:52:40.016368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.020 [2024-10-01 13:52:40.016482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.020 [2024-10-01 13:52:40.016514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.020 [2024-10-01 13:52:40.016532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.020 [2024-10-01 13:52:40.016566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.020 [2024-10-01 13:52:40.016597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.020 [2024-10-01 13:52:40.016614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.020 [2024-10-01 13:52:40.016628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.020 [2024-10-01 13:52:40.016659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.020 [2024-10-01 13:52:40.020750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.020 [2024-10-01 13:52:40.020864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.020 [2024-10-01 13:52:40.020895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.020 [2024-10-01 13:52:40.020926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.020 [2024-10-01 13:52:40.021519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.020 [2024-10-01 13:52:40.021707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.020 [2024-10-01 13:52:40.021744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.020 [2024-10-01 13:52:40.021762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.020 [2024-10-01 13:52:40.021871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.020 [2024-10-01 13:52:40.026504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.020 [2024-10-01 13:52:40.026626] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.020 [2024-10-01 13:52:40.026658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.020 [2024-10-01 13:52:40.026676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.020 [2024-10-01 13:52:40.026708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.020 [2024-10-01 13:52:40.026739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.020 [2024-10-01 13:52:40.026756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.020 [2024-10-01 13:52:40.026770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.020 [2024-10-01 13:52:40.026800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.020 [2024-10-01 13:52:40.031217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.020 [2024-10-01 13:52:40.031332] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.020 [2024-10-01 13:52:40.031363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.020 [2024-10-01 13:52:40.031381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.020 [2024-10-01 13:52:40.031413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.020 [2024-10-01 13:52:40.031444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.020 [2024-10-01 13:52:40.031460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.020 [2024-10-01 13:52:40.031474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.020 [2024-10-01 13:52:40.031504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.020 [2024-10-01 13:52:40.036604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.020 [2024-10-01 13:52:40.036719] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.020 [2024-10-01 13:52:40.036750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.020 [2024-10-01 13:52:40.036768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.020 [2024-10-01 13:52:40.036815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.020 [2024-10-01 13:52:40.036851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.020 [2024-10-01 13:52:40.036868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.020 [2024-10-01 13:52:40.036906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.020 [2024-10-01 13:52:40.036958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.020 [2024-10-01 13:52:40.041310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.020 [2024-10-01 13:52:40.041425] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.020 [2024-10-01 13:52:40.041457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.020 [2024-10-01 13:52:40.041474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.020 [2024-10-01 13:52:40.041507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.020 [2024-10-01 13:52:40.041537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.020 [2024-10-01 13:52:40.041554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.020 [2024-10-01 13:52:40.041568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.020 [2024-10-01 13:52:40.041598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.020 [2024-10-01 13:52:40.046701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.020 [2024-10-01 13:52:40.046814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.020 [2024-10-01 13:52:40.046846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.020 [2024-10-01 13:52:40.046864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.020 [2024-10-01 13:52:40.047453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.020 [2024-10-01 13:52:40.047637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.020 [2024-10-01 13:52:40.047673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.020 [2024-10-01 13:52:40.047692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.020 [2024-10-01 13:52:40.047800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.020 [2024-10-01 13:52:40.051406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.021 [2024-10-01 13:52:40.051521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.021 [2024-10-01 13:52:40.051553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.021 [2024-10-01 13:52:40.051571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.021 [2024-10-01 13:52:40.051603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.021 [2024-10-01 13:52:40.051634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.021 [2024-10-01 13:52:40.051651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.021 [2024-10-01 13:52:40.051664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.021 [2024-10-01 13:52:40.051695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.021 [2024-10-01 13:52:40.058485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.021 [2024-10-01 13:52:40.058832] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.021 [2024-10-01 13:52:40.058893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.021 [2024-10-01 13:52:40.058931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.021 [2024-10-01 13:52:40.059024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.021 [2024-10-01 13:52:40.059064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.021 [2024-10-01 13:52:40.059082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.021 [2024-10-01 13:52:40.059097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.021 [2024-10-01 13:52:40.059128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.021 [2024-10-01 13:52:40.062177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.021 [2024-10-01 13:52:40.062365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.021 [2024-10-01 13:52:40.062406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.021 [2024-10-01 13:52:40.062426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.021 [2024-10-01 13:52:40.062467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.021 [2024-10-01 13:52:40.062501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.021 [2024-10-01 13:52:40.062518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.021 [2024-10-01 13:52:40.062532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.021 [2024-10-01 13:52:40.062577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.021 [2024-10-01 13:52:40.069526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.021 [2024-10-01 13:52:40.069643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.021 [2024-10-01 13:52:40.069675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.021 [2024-10-01 13:52:40.069704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.021 [2024-10-01 13:52:40.069737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.021 [2024-10-01 13:52:40.069768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.021 [2024-10-01 13:52:40.069785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.021 [2024-10-01 13:52:40.069799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.021 [2024-10-01 13:52:40.069829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.021 [2024-10-01 13:52:40.073344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.021 [2024-10-01 13:52:40.073679] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.021 [2024-10-01 13:52:40.073723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.021 [2024-10-01 13:52:40.073744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.021 [2024-10-01 13:52:40.073814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.021 [2024-10-01 13:52:40.073869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.021 [2024-10-01 13:52:40.073889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.021 [2024-10-01 13:52:40.073903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.021 [2024-10-01 13:52:40.073951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.021 [2024-10-01 13:52:40.079924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.021 [2024-10-01 13:52:40.080048] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.021 [2024-10-01 13:52:40.080079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.021 [2024-10-01 13:52:40.080098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.021 [2024-10-01 13:52:40.080679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.021 [2024-10-01 13:52:40.080868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.021 [2024-10-01 13:52:40.080924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.021 [2024-10-01 13:52:40.080946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.021 [2024-10-01 13:52:40.081058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.021 [2024-10-01 13:52:40.084649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.021 [2024-10-01 13:52:40.084767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.021 [2024-10-01 13:52:40.084798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.021 [2024-10-01 13:52:40.084817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.021 [2024-10-01 13:52:40.084850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.021 [2024-10-01 13:52:40.084882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.021 [2024-10-01 13:52:40.084899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.021 [2024-10-01 13:52:40.084929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.021 [2024-10-01 13:52:40.084964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.021 [2024-10-01 13:52:40.090592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.021 [2024-10-01 13:52:40.090738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.021 [2024-10-01 13:52:40.090770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.021 [2024-10-01 13:52:40.090788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.021 [2024-10-01 13:52:40.090822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.021 [2024-10-01 13:52:40.090860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.021 [2024-10-01 13:52:40.090878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.021 [2024-10-01 13:52:40.090893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.021 [2024-10-01 13:52:40.090978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.021 [2024-10-01 13:52:40.095109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.021 [2024-10-01 13:52:40.095226] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.021 [2024-10-01 13:52:40.095257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.021 [2024-10-01 13:52:40.095275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.021 [2024-10-01 13:52:40.095859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.021 [2024-10-01 13:52:40.096049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.021 [2024-10-01 13:52:40.096077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.021 [2024-10-01 13:52:40.096092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.021 [2024-10-01 13:52:40.096211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.021 [2024-10-01 13:52:40.100899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.021 [2024-10-01 13:52:40.101034] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.021 [2024-10-01 13:52:40.101066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.021 [2024-10-01 13:52:40.101083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.021 [2024-10-01 13:52:40.101116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.021 [2024-10-01 13:52:40.101147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.021 [2024-10-01 13:52:40.101165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.021 [2024-10-01 13:52:40.101179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.021 [2024-10-01 13:52:40.101209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.021 [2024-10-01 13:52:40.105612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.021 [2024-10-01 13:52:40.105725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.021 [2024-10-01 13:52:40.105757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.021 [2024-10-01 13:52:40.105775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.021 [2024-10-01 13:52:40.105807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.021 [2024-10-01 13:52:40.105838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.021 [2024-10-01 13:52:40.105854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.021 [2024-10-01 13:52:40.105869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.021 [2024-10-01 13:52:40.105899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.021 [2024-10-01 13:52:40.111005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.021 [2024-10-01 13:52:40.111117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.021 [2024-10-01 13:52:40.111148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.021 [2024-10-01 13:52:40.111183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.021 [2024-10-01 13:52:40.111234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.021 [2024-10-01 13:52:40.111271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.021 [2024-10-01 13:52:40.111288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.021 [2024-10-01 13:52:40.111303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.021 [2024-10-01 13:52:40.111333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.021 [2024-10-01 13:52:40.115865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.021 [2024-10-01 13:52:40.115993] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.021 [2024-10-01 13:52:40.116025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.021 [2024-10-01 13:52:40.116043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.021 [2024-10-01 13:52:40.116075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.021 [2024-10-01 13:52:40.116106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.022 [2024-10-01 13:52:40.116123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.022 [2024-10-01 13:52:40.116137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.022 [2024-10-01 13:52:40.116168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.022 [2024-10-01 13:52:40.121093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.022 [2024-10-01 13:52:40.121206] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.022 [2024-10-01 13:52:40.121237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.022 [2024-10-01 13:52:40.121254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.022 [2024-10-01 13:52:40.121286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.022 [2024-10-01 13:52:40.121317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.022 [2024-10-01 13:52:40.121333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.022 [2024-10-01 13:52:40.121347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.022 [2024-10-01 13:52:40.121377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.022 [2024-10-01 13:52:40.125970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.022 [2024-10-01 13:52:40.126115] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.022 [2024-10-01 13:52:40.126148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.022 [2024-10-01 13:52:40.126165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.022 [2024-10-01 13:52:40.126198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.022 [2024-10-01 13:52:40.126229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.022 [2024-10-01 13:52:40.126265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.022 [2024-10-01 13:52:40.126280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.022 [2024-10-01 13:52:40.126313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.022 [2024-10-01 13:52:40.131188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.022 [2024-10-01 13:52:40.131301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.022 [2024-10-01 13:52:40.131332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.022 [2024-10-01 13:52:40.131350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.022 [2024-10-01 13:52:40.132563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.022 [2024-10-01 13:52:40.133361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.022 [2024-10-01 13:52:40.133400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.022 [2024-10-01 13:52:40.133419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.022 [2024-10-01 13:52:40.133738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.022 [2024-10-01 13:52:40.136085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.022 [2024-10-01 13:52:40.136195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.022 [2024-10-01 13:52:40.136226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.022 [2024-10-01 13:52:40.136244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.022 [2024-10-01 13:52:40.136275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.022 [2024-10-01 13:52:40.136305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.022 [2024-10-01 13:52:40.136322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.022 [2024-10-01 13:52:40.136336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.022 [2024-10-01 13:52:40.136904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.022 [2024-10-01 13:52:40.141277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.022 [2024-10-01 13:52:40.141390] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.022 [2024-10-01 13:52:40.141422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.022 [2024-10-01 13:52:40.141440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.022 [2024-10-01 13:52:40.142671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.022 [2024-10-01 13:52:40.142902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.022 [2024-10-01 13:52:40.142954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.022 [2024-10-01 13:52:40.142972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.022 [2024-10-01 13:52:40.143856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.022 [2024-10-01 13:52:40.146170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.022 [2024-10-01 13:52:40.147483] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.022 [2024-10-01 13:52:40.147530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.022 [2024-10-01 13:52:40.147551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.022 [2024-10-01 13:52:40.148308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.022 [2024-10-01 13:52:40.148647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.022 [2024-10-01 13:52:40.148686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.022 [2024-10-01 13:52:40.148704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.022 [2024-10-01 13:52:40.148776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.022 [2024-10-01 13:52:40.152016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.022 [2024-10-01 13:52:40.152135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.022 [2024-10-01 13:52:40.152167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.022 [2024-10-01 13:52:40.152185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.022 [2024-10-01 13:52:40.152218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.022 [2024-10-01 13:52:40.152249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.022 [2024-10-01 13:52:40.152267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.022 [2024-10-01 13:52:40.152281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.022 [2024-10-01 13:52:40.152311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.022 [2024-10-01 13:52:40.156259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.022 [2024-10-01 13:52:40.157546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.022 [2024-10-01 13:52:40.157590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.022 [2024-10-01 13:52:40.157611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.022 [2024-10-01 13:52:40.157836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.022 [2024-10-01 13:52:40.158756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.022 [2024-10-01 13:52:40.158796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.022 [2024-10-01 13:52:40.158815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.022 [2024-10-01 13:52:40.159545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.022 [2024-10-01 13:52:40.163106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.022 [2024-10-01 13:52:40.163439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.022 [2024-10-01 13:52:40.163482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.022 [2024-10-01 13:52:40.163503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.022 [2024-10-01 13:52:40.163595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.022 [2024-10-01 13:52:40.163634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.022 [2024-10-01 13:52:40.163652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.022 [2024-10-01 13:52:40.163666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.022 [2024-10-01 13:52:40.163697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.022 [2024-10-01 13:52:40.166954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.022 [2024-10-01 13:52:40.167077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.022 [2024-10-01 13:52:40.167109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.022 [2024-10-01 13:52:40.167127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.022 [2024-10-01 13:52:40.167160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.022 [2024-10-01 13:52:40.167192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.022 [2024-10-01 13:52:40.167209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.022 [2024-10-01 13:52:40.167223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.022 [2024-10-01 13:52:40.167253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.022 [2024-10-01 13:52:40.174198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.022 [2024-10-01 13:52:40.174314] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.022 [2024-10-01 13:52:40.174344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.022 [2024-10-01 13:52:40.174362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.022 [2024-10-01 13:52:40.174394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.022 [2024-10-01 13:52:40.174425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.022 [2024-10-01 13:52:40.174442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.022 [2024-10-01 13:52:40.174456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.022 [2024-10-01 13:52:40.174485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.022 [2024-10-01 13:52:40.177976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.022 [2024-10-01 13:52:40.178312] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.022 [2024-10-01 13:52:40.178355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.022 [2024-10-01 13:52:40.178376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.022 [2024-10-01 13:52:40.178446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.022 [2024-10-01 13:52:40.178484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.022 [2024-10-01 13:52:40.178502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.022 [2024-10-01 13:52:40.178535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.022 [2024-10-01 13:52:40.178584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.022 [2024-10-01 13:52:40.184510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.022 [2024-10-01 13:52:40.184629] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.022 [2024-10-01 13:52:40.184661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.022 [2024-10-01 13:52:40.184678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.022 [2024-10-01 13:52:40.185279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.022 [2024-10-01 13:52:40.185467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.022 [2024-10-01 13:52:40.185503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.022 [2024-10-01 13:52:40.185523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.022 [2024-10-01 13:52:40.185632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.022 [2024-10-01 13:52:40.189123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.022 [2024-10-01 13:52:40.189239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.022 [2024-10-01 13:52:40.189271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.022 [2024-10-01 13:52:40.189289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.022 [2024-10-01 13:52:40.189321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.023 [2024-10-01 13:52:40.189352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.023 [2024-10-01 13:52:40.189369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.023 [2024-10-01 13:52:40.189383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.023 [2024-10-01 13:52:40.189414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.023 [2024-10-01 13:52:40.195048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.023 [2024-10-01 13:52:40.195174] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.023 [2024-10-01 13:52:40.195205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.023 [2024-10-01 13:52:40.195224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.023 [2024-10-01 13:52:40.195256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.023 [2024-10-01 13:52:40.195288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.023 [2024-10-01 13:52:40.195305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.023 [2024-10-01 13:52:40.195320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.023 [2024-10-01 13:52:40.195350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.023 [2024-10-01 13:52:40.199573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.023 [2024-10-01 13:52:40.199692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.023 [2024-10-01 13:52:40.199738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.023 [2024-10-01 13:52:40.199770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.023 [2024-10-01 13:52:40.200376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.023 [2024-10-01 13:52:40.200567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.023 [2024-10-01 13:52:40.200594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.023 [2024-10-01 13:52:40.200609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.023 [2024-10-01 13:52:40.200741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.023 [2024-10-01 13:52:40.205332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.023 [2024-10-01 13:52:40.205459] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.023 [2024-10-01 13:52:40.205490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.023 [2024-10-01 13:52:40.205509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.023 [2024-10-01 13:52:40.205542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.023 [2024-10-01 13:52:40.205573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.023 [2024-10-01 13:52:40.205591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.023 [2024-10-01 13:52:40.205605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.023 [2024-10-01 13:52:40.205635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.023 [2024-10-01 13:52:40.210067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.023 [2024-10-01 13:52:40.210181] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.023 [2024-10-01 13:52:40.210212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.023 [2024-10-01 13:52:40.210229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.023 [2024-10-01 13:52:40.210262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.023 [2024-10-01 13:52:40.210293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.023 [2024-10-01 13:52:40.210310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.023 [2024-10-01 13:52:40.210323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.023 [2024-10-01 13:52:40.210354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.023 [2024-10-01 13:52:40.215429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.023 [2024-10-01 13:52:40.215544] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.023 [2024-10-01 13:52:40.215574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.023 [2024-10-01 13:52:40.215593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.023 [2024-10-01 13:52:40.215625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.023 [2024-10-01 13:52:40.215674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.023 [2024-10-01 13:52:40.215693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.023 [2024-10-01 13:52:40.215707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.023 [2024-10-01 13:52:40.215738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.023 [2024-10-01 13:52:40.220223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.023 [2024-10-01 13:52:40.220340] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.023 [2024-10-01 13:52:40.220371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.023 [2024-10-01 13:52:40.220388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.023 [2024-10-01 13:52:40.220421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.023 [2024-10-01 13:52:40.220452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.023 [2024-10-01 13:52:40.220469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.023 [2024-10-01 13:52:40.220484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.023 [2024-10-01 13:52:40.220513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.023 [2024-10-01 13:52:40.225519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.023 [2024-10-01 13:52:40.225634] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.023 [2024-10-01 13:52:40.225665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.023 [2024-10-01 13:52:40.225683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.023 [2024-10-01 13:52:40.225715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.023 [2024-10-01 13:52:40.225746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.023 [2024-10-01 13:52:40.225763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.023 [2024-10-01 13:52:40.225777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.023 [2024-10-01 13:52:40.226374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.023 [2024-10-01 13:52:40.230319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.023 [2024-10-01 13:52:40.230461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.023 [2024-10-01 13:52:40.230493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.023 [2024-10-01 13:52:40.230512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.023 [2024-10-01 13:52:40.230572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.023 [2024-10-01 13:52:40.230608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.023 [2024-10-01 13:52:40.230626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.023 [2024-10-01 13:52:40.230641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.023 [2024-10-01 13:52:40.230690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.023 [2024-10-01 13:52:40.237583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.023 [2024-10-01 13:52:40.237935] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.023 [2024-10-01 13:52:40.237979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.023 [2024-10-01 13:52:40.238000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.023 [2024-10-01 13:52:40.238071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.023 [2024-10-01 13:52:40.238109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.023 [2024-10-01 13:52:40.238128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.023 [2024-10-01 13:52:40.238142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.023 [2024-10-01 13:52:40.238173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.023 [2024-10-01 13:52:40.240431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.023 [2024-10-01 13:52:40.240542] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.023 [2024-10-01 13:52:40.240573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.023 [2024-10-01 13:52:40.240591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.023 [2024-10-01 13:52:40.241177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.023 [2024-10-01 13:52:40.241358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.023 [2024-10-01 13:52:40.241385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.023 [2024-10-01 13:52:40.241400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.023 [2024-10-01 13:52:40.241507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.023 [2024-10-01 13:52:40.248662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.023 [2024-10-01 13:52:40.248784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.023 [2024-10-01 13:52:40.248816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.023 [2024-10-01 13:52:40.248833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.023 [2024-10-01 13:52:40.248865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.023 [2024-10-01 13:52:40.248896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.023 [2024-10-01 13:52:40.248929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.023 [2024-10-01 13:52:40.248947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.023 [2024-10-01 13:52:40.248979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.023 [2024-10-01 13:52:40.252393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.023 [2024-10-01 13:52:40.252726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.023 [2024-10-01 13:52:40.252769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.023 [2024-10-01 13:52:40.252810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.023 [2024-10-01 13:52:40.252900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.023 [2024-10-01 13:52:40.252959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.023 [2024-10-01 13:52:40.252978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.023 [2024-10-01 13:52:40.252993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.023 [2024-10-01 13:52:40.253024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.023 [2024-10-01 13:52:40.258877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.023 [2024-10-01 13:52:40.259001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.023 [2024-10-01 13:52:40.259033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.023 [2024-10-01 13:52:40.259051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.023 [2024-10-01 13:52:40.259622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.023 [2024-10-01 13:52:40.259807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.023 [2024-10-01 13:52:40.259844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.023 [2024-10-01 13:52:40.259861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.023 [2024-10-01 13:52:40.259984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.023 [2024-10-01 13:52:40.263461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.023 [2024-10-01 13:52:40.263580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.023 [2024-10-01 13:52:40.263611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.024 [2024-10-01 13:52:40.263628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.024 [2024-10-01 13:52:40.263661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.024 [2024-10-01 13:52:40.263692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.024 [2024-10-01 13:52:40.263709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.024 [2024-10-01 13:52:40.263723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.024 [2024-10-01 13:52:40.263764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.024 [2024-10-01 13:52:40.269389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.024 [2024-10-01 13:52:40.269506] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.024 [2024-10-01 13:52:40.269538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.024 [2024-10-01 13:52:40.269555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.024 [2024-10-01 13:52:40.269588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.024 [2024-10-01 13:52:40.269619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.024 [2024-10-01 13:52:40.269655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.024 [2024-10-01 13:52:40.269671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.024 [2024-10-01 13:52:40.269703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.024 [2024-10-01 13:52:40.273808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.024 [2024-10-01 13:52:40.273937] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.024 [2024-10-01 13:52:40.273968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.024 [2024-10-01 13:52:40.273986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.024 [2024-10-01 13:52:40.274573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.024 [2024-10-01 13:52:40.274761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.024 [2024-10-01 13:52:40.274797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.024 [2024-10-01 13:52:40.274815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.024 [2024-10-01 13:52:40.274939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.024 [2024-10-01 13:52:40.279563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.024 [2024-10-01 13:52:40.279681] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.024 [2024-10-01 13:52:40.279712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.024 [2024-10-01 13:52:40.279729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.024 [2024-10-01 13:52:40.279762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.024 [2024-10-01 13:52:40.279793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.024 [2024-10-01 13:52:40.279810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.024 [2024-10-01 13:52:40.279824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.024 [2024-10-01 13:52:40.279855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.024 [2024-10-01 13:52:40.284247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.024 [2024-10-01 13:52:40.284365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.024 [2024-10-01 13:52:40.284396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.024 [2024-10-01 13:52:40.284414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.024 [2024-10-01 13:52:40.284447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.024 [2024-10-01 13:52:40.284478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.024 [2024-10-01 13:52:40.284495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.024 [2024-10-01 13:52:40.284509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.024 [2024-10-01 13:52:40.284540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.024 [2024-10-01 13:52:40.289658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.024 [2024-10-01 13:52:40.289775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.024 [2024-10-01 13:52:40.289807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.024 [2024-10-01 13:52:40.289825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.024 [2024-10-01 13:52:40.289858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.024 [2024-10-01 13:52:40.289889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.024 [2024-10-01 13:52:40.289906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.024 [2024-10-01 13:52:40.289940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.024 [2024-10-01 13:52:40.289974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.024 [2024-10-01 13:52:40.294489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.024 [2024-10-01 13:52:40.294616] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.024 [2024-10-01 13:52:40.294648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.024 [2024-10-01 13:52:40.294665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.024 [2024-10-01 13:52:40.294698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.024 [2024-10-01 13:52:40.294730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.024 [2024-10-01 13:52:40.294746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.024 [2024-10-01 13:52:40.294760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.024 [2024-10-01 13:52:40.294797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.024 [2024-10-01 13:52:40.299755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.024 [2024-10-01 13:52:40.299883] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.024 [2024-10-01 13:52:40.299931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.024 [2024-10-01 13:52:40.299953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.024 [2024-10-01 13:52:40.299987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.024 [2024-10-01 13:52:40.300018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.024 [2024-10-01 13:52:40.300036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.024 [2024-10-01 13:52:40.300050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.024 [2024-10-01 13:52:40.300081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.024 [2024-10-01 13:52:40.304675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.024 [2024-10-01 13:52:40.304821] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.024 [2024-10-01 13:52:40.304854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.024 [2024-10-01 13:52:40.304872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.024 [2024-10-01 13:52:40.304957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.024 [2024-10-01 13:52:40.304991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.024 [2024-10-01 13:52:40.305008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.024 [2024-10-01 13:52:40.305023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.024 [2024-10-01 13:52:40.305055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.024 [2024-10-01 13:52:40.309849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.024 [2024-10-01 13:52:40.309997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.024 [2024-10-01 13:52:40.310030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.024 [2024-10-01 13:52:40.310048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.024 [2024-10-01 13:52:40.310081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.024 [2024-10-01 13:52:40.310113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.024 [2024-10-01 13:52:40.310130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.024 [2024-10-01 13:52:40.310146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.024 [2024-10-01 13:52:40.310177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.024 [2024-10-01 13:52:40.314773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.024 [2024-10-01 13:52:40.314903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.024 [2024-10-01 13:52:40.314955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.024 [2024-10-01 13:52:40.314974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.024 [2024-10-01 13:52:40.315009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.024 [2024-10-01 13:52:40.315041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.024 [2024-10-01 13:52:40.315058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.024 [2024-10-01 13:52:40.315077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.024 [2024-10-01 13:52:40.315109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.024 [2024-10-01 13:52:40.319970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.024 [2024-10-01 13:52:40.320093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.024 [2024-10-01 13:52:40.320125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.024 [2024-10-01 13:52:40.320143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.024 [2024-10-01 13:52:40.320176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.024 [2024-10-01 13:52:40.320208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.024 [2024-10-01 13:52:40.320225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.024 [2024-10-01 13:52:40.320265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.024 [2024-10-01 13:52:40.320299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.024 [2024-10-01 13:52:40.324870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.024 [2024-10-01 13:52:40.324997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.024 [2024-10-01 13:52:40.325029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.024 [2024-10-01 13:52:40.325047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.024 [2024-10-01 13:52:40.325079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.024 [2024-10-01 13:52:40.325111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.024 [2024-10-01 13:52:40.325128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.024 [2024-10-01 13:52:40.325142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.024 [2024-10-01 13:52:40.325172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.024 [2024-10-01 13:52:40.330066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.024 [2024-10-01 13:52:40.330181] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.024 [2024-10-01 13:52:40.330213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.024 [2024-10-01 13:52:40.330232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.024 [2024-10-01 13:52:40.330264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.024 [2024-10-01 13:52:40.330295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.024 [2024-10-01 13:52:40.330312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.024 [2024-10-01 13:52:40.330327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.024 [2024-10-01 13:52:40.330357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.025 [2024-10-01 13:52:40.334998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.335129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.335161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.025 [2024-10-01 13:52:40.335180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.335213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.335244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.025 [2024-10-01 13:52:40.335261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.025 [2024-10-01 13:52:40.335276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.025 [2024-10-01 13:52:40.335307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.025 [2024-10-01 13:52:40.340158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.340300] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.340332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.025 [2024-10-01 13:52:40.340350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.340382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.340413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.025 [2024-10-01 13:52:40.340430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.025 [2024-10-01 13:52:40.340444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.025 [2024-10-01 13:52:40.341676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.025 [2024-10-01 13:52:40.345090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.345203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.345234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.025 [2024-10-01 13:52:40.345251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.345284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.345314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.025 [2024-10-01 13:52:40.345332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.025 [2024-10-01 13:52:40.345346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.025 [2024-10-01 13:52:40.345376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.025 [2024-10-01 13:52:40.350276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.350387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.350419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.025 [2024-10-01 13:52:40.350436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.350469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.350499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.025 [2024-10-01 13:52:40.350517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.025 [2024-10-01 13:52:40.350531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.025 [2024-10-01 13:52:40.350576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.025 [2024-10-01 13:52:40.355184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.355298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.355329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.025 [2024-10-01 13:52:40.355347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.355380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.355430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.025 [2024-10-01 13:52:40.355449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.025 [2024-10-01 13:52:40.355462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.025 [2024-10-01 13:52:40.356679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.025 [2024-10-01 13:52:40.360363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.360474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.360506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.025 [2024-10-01 13:52:40.360523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.361113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.361315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.025 [2024-10-01 13:52:40.361353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.025 [2024-10-01 13:52:40.361372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.025 [2024-10-01 13:52:40.361481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.025 [2024-10-01 13:52:40.365274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.365386] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.365418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.025 [2024-10-01 13:52:40.365436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.365468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.365498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.025 [2024-10-01 13:52:40.365516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.025 [2024-10-01 13:52:40.365530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.025 [2024-10-01 13:52:40.365561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.025 [2024-10-01 13:52:40.372413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.372860] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.372907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.025 [2024-10-01 13:52:40.372944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.373017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.373057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.025 [2024-10-01 13:52:40.373076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.025 [2024-10-01 13:52:40.373091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.025 [2024-10-01 13:52:40.373155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.025 [2024-10-01 13:52:40.375362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.375475] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.375506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.025 [2024-10-01 13:52:40.375524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.376138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.376329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.025 [2024-10-01 13:52:40.376370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.025 [2024-10-01 13:52:40.376388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.025 [2024-10-01 13:52:40.376500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.025 [2024-10-01 13:52:40.383875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.384067] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.384102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.025 [2024-10-01 13:52:40.384121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.384154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.384186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.025 [2024-10-01 13:52:40.384204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.025 [2024-10-01 13:52:40.384220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.025 [2024-10-01 13:52:40.384251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.025 [2024-10-01 13:52:40.385452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.385561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.385592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.025 [2024-10-01 13:52:40.385609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.386864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.387687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.025 [2024-10-01 13:52:40.387728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.025 [2024-10-01 13:52:40.387748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.025 [2024-10-01 13:52:40.388104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.025 [2024-10-01 13:52:40.394366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.394508] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.394558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.025 [2024-10-01 13:52:40.394612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.395227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.395419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.025 [2024-10-01 13:52:40.395455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.025 [2024-10-01 13:52:40.395474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.025 [2024-10-01 13:52:40.395606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.025 [2024-10-01 13:52:40.395675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.395765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.395806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.025 [2024-10-01 13:52:40.395826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.395860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.395891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.025 [2024-10-01 13:52:40.395908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.025 [2024-10-01 13:52:40.395939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.025 [2024-10-01 13:52:40.397176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.025 [2024-10-01 13:52:40.404865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.404998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.405030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.025 [2024-10-01 13:52:40.405048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.405081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.405112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.025 [2024-10-01 13:52:40.405129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.025 [2024-10-01 13:52:40.405143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.025 [2024-10-01 13:52:40.405174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.025 [2024-10-01 13:52:40.405745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.025 [2024-10-01 13:52:40.406402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.025 [2024-10-01 13:52:40.406446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.025 [2024-10-01 13:52:40.406474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.025 [2024-10-01 13:52:40.406645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.025 [2024-10-01 13:52:40.406771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.406823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.406842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.406885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.026 [2024-10-01 13:52:40.415095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.415214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.415246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.026 [2024-10-01 13:52:40.415264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.415297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.415327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.415344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.415359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.415390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.026 [2024-10-01 13:52:40.417694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.418041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.418084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.026 [2024-10-01 13:52:40.418105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.418174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.418213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.418231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.418245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.418276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.026 [2024-10-01 13:52:40.425203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.425317] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.425348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.026 [2024-10-01 13:52:40.425366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.425413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.425449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.425466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.425480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.425511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.026 [2024-10-01 13:52:40.428844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.428974] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.429006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.026 [2024-10-01 13:52:40.429024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.429057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.429087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.429104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.429117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.429148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.026 [2024-10-01 13:52:40.435294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.435406] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.435437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.026 [2024-10-01 13:52:40.435455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.435488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.435519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.435536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.435550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.436137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.026 [2024-10-01 13:52:40.439147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.439266] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.439297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.026 [2024-10-01 13:52:40.439315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.439891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.440102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.440138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.440156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.440265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.026 [2024-10-01 13:52:40.445387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.445501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.445532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.026 [2024-10-01 13:52:40.445550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.446797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.447596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.447637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.447657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.447993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.026 [2024-10-01 13:52:40.449650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.449761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.449793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.026 [2024-10-01 13:52:40.449811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.449844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.449875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.449892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.449906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.449955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.026 [2024-10-01 13:52:40.455489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.455603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.455633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.026 [2024-10-01 13:52:40.455651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.456876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.457115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.457154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.457172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.458072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.026 [2024-10-01 13:52:40.459946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.460058] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.460100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.026 [2024-10-01 13:52:40.460120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.460152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.460183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.460200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.460232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.460266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.026 [2024-10-01 13:52:40.466125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.466365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.466407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.026 [2024-10-01 13:52:40.466427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.466547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.466599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.466620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.466635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.466666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.026 [2024-10-01 13:52:40.470039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.470155] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.470196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.026 [2024-10-01 13:52:40.470216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.470249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.470280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.470296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.470310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.470342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.026 [2024-10-01 13:52:40.477386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.477721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.477765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.026 [2024-10-01 13:52:40.477785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.477855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.477893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.477926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.477944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.477976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.026 [2024-10-01 13:52:40.480134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.480260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.480293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.026 [2024-10-01 13:52:40.480311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.480343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.480927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.480964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.480983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.481184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.026 [2024-10-01 13:52:40.488440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.488558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.026 [2024-10-01 13:52:40.488589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.026 [2024-10-01 13:52:40.488607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.026 [2024-10-01 13:52:40.488639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.026 [2024-10-01 13:52:40.488670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.026 [2024-10-01 13:52:40.488686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.026 [2024-10-01 13:52:40.488700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.026 [2024-10-01 13:52:40.488731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.026 [2024-10-01 13:52:40.492216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.026 [2024-10-01 13:52:40.492550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.492594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.027 [2024-10-01 13:52:40.492615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.492684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.027 [2024-10-01 13:52:40.492722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.027 [2024-10-01 13:52:40.492740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.027 [2024-10-01 13:52:40.492754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.027 [2024-10-01 13:52:40.492785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.027 [2024-10-01 13:52:40.498697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.027 [2024-10-01 13:52:40.498810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.498842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.027 [2024-10-01 13:52:40.498859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.499450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.027 [2024-10-01 13:52:40.499656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.027 [2024-10-01 13:52:40.499693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.027 [2024-10-01 13:52:40.499712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.027 [2024-10-01 13:52:40.499826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.027 [2024-10-01 13:52:40.503342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.027 [2024-10-01 13:52:40.503458] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.503490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.027 [2024-10-01 13:52:40.503507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.503539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.027 [2024-10-01 13:52:40.503570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.027 [2024-10-01 13:52:40.503587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.027 [2024-10-01 13:52:40.503601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.027 [2024-10-01 13:52:40.503631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.027 [2024-10-01 13:52:40.509189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.027 [2024-10-01 13:52:40.509307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.509346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.027 [2024-10-01 13:52:40.509364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.509397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.027 [2024-10-01 13:52:40.509428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.027 [2024-10-01 13:52:40.509445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.027 [2024-10-01 13:52:40.509460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.027 [2024-10-01 13:52:40.509490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.027 [2024-10-01 13:52:40.513652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.027 [2024-10-01 13:52:40.513768] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.513799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.027 [2024-10-01 13:52:40.513817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.514419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.027 [2024-10-01 13:52:40.514625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.027 [2024-10-01 13:52:40.514662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.027 [2024-10-01 13:52:40.514681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.027 [2024-10-01 13:52:40.514818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.027 [2024-10-01 13:52:40.519483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.027 [2024-10-01 13:52:40.519613] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.519645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.027 [2024-10-01 13:52:40.519664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.519697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.027 [2024-10-01 13:52:40.519728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.027 [2024-10-01 13:52:40.519746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.027 [2024-10-01 13:52:40.519760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.027 [2024-10-01 13:52:40.519791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.027 [2024-10-01 13:52:40.524264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.027 [2024-10-01 13:52:40.524381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.524413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.027 [2024-10-01 13:52:40.524431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.524464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.027 [2024-10-01 13:52:40.524495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.027 [2024-10-01 13:52:40.524512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.027 [2024-10-01 13:52:40.524527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.027 [2024-10-01 13:52:40.524557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.027 [2024-10-01 13:52:40.529665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.027 [2024-10-01 13:52:40.529777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.529809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.027 [2024-10-01 13:52:40.529828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.529861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.027 [2024-10-01 13:52:40.529892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.027 [2024-10-01 13:52:40.529924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.027 [2024-10-01 13:52:40.529943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.027 [2024-10-01 13:52:40.529975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.027 [2024-10-01 13:52:40.534492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.027 [2024-10-01 13:52:40.534616] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.534648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.027 [2024-10-01 13:52:40.534690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.534726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.027 [2024-10-01 13:52:40.534757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.027 [2024-10-01 13:52:40.534774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.027 [2024-10-01 13:52:40.534788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.027 [2024-10-01 13:52:40.534819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.027 [2024-10-01 13:52:40.539756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.027 [2024-10-01 13:52:40.539870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.539902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.027 [2024-10-01 13:52:40.539938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.539973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.027 [2024-10-01 13:52:40.540004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.027 [2024-10-01 13:52:40.540021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.027 [2024-10-01 13:52:40.540036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.027 [2024-10-01 13:52:40.540066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.027 [2024-10-01 13:52:40.544597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.027 [2024-10-01 13:52:40.544717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.544749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.027 [2024-10-01 13:52:40.544771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.544819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.027 [2024-10-01 13:52:40.544854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.027 [2024-10-01 13:52:40.544871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.027 [2024-10-01 13:52:40.544886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.027 [2024-10-01 13:52:40.544933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.027 [2024-10-01 13:52:40.549848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.027 [2024-10-01 13:52:40.549976] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.550017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.027 [2024-10-01 13:52:40.550035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.551270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.027 [2024-10-01 13:52:40.552048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.027 [2024-10-01 13:52:40.552105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.027 [2024-10-01 13:52:40.552125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.027 [2024-10-01 13:52:40.552445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.027 [2024-10-01 13:52:40.554694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.027 [2024-10-01 13:52:40.554812] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.554843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.027 [2024-10-01 13:52:40.554861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.554893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.027 [2024-10-01 13:52:40.554942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.027 [2024-10-01 13:52:40.554962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.027 [2024-10-01 13:52:40.554976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.027 [2024-10-01 13:52:40.555009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.027 [2024-10-01 13:52:40.559954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.027 [2024-10-01 13:52:40.560068] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.560099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.027 [2024-10-01 13:52:40.560117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.561332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.027 [2024-10-01 13:52:40.561580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.027 [2024-10-01 13:52:40.561618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.027 [2024-10-01 13:52:40.561636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.027 [2024-10-01 13:52:40.562547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.027 [2024-10-01 13:52:40.564783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.027 [2024-10-01 13:52:40.566085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.027 [2024-10-01 13:52:40.566130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.027 [2024-10-01 13:52:40.566150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.027 [2024-10-01 13:52:40.566937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.567283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.567322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.567341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.567413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.028 [2024-10-01 13:52:40.570674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.570795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.570827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.028 [2024-10-01 13:52:40.570845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.570877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.570908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.570945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.570959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.570991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.028 [2024-10-01 13:52:40.574868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.574995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.575027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.028 [2024-10-01 13:52:40.575044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.576257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.576488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.576524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.576542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.577443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.028 [2024-10-01 13:52:40.581756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.582109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.582153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.028 [2024-10-01 13:52:40.582173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.582244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.582282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.582300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.582314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.582349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.028 [2024-10-01 13:52:40.585607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.585727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.585759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.028 [2024-10-01 13:52:40.585777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.585831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.585869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.585887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.585901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.585948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.028 [2024-10-01 13:52:40.592968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.593089] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.593121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.028 [2024-10-01 13:52:40.593148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.593181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.593212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.593229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.593243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.593274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.028 [2024-10-01 13:52:40.596781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.597135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.597179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.028 [2024-10-01 13:52:40.597200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.597270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.597309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.597327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.597341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.597373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.028 [2024-10-01 13:52:40.603376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.603495] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.603526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.028 [2024-10-01 13:52:40.603544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.604151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.604340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.604377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.604427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.604554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.028 [2024-10-01 13:52:40.608102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.608219] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.608250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.028 [2024-10-01 13:52:40.608267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.608300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.608331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.608348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.608363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.608394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.028 [2024-10-01 13:52:40.613961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.614082] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.614115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.028 [2024-10-01 13:52:40.614133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.614166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.614198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.614227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.614241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.614272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.028 [2024-10-01 13:52:40.618465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.618594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.618626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.028 [2024-10-01 13:52:40.618644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.619259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.619449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.619486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.619504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.619616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.028 [2024-10-01 13:52:40.624268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.624429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.624461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.028 [2024-10-01 13:52:40.624479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.624525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.624558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.624575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.624589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.624620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.028 [2024-10-01 13:52:40.629034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.629149] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.629180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.028 [2024-10-01 13:52:40.629198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.629230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.629261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.629278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.629292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.629322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.028 [2024-10-01 13:52:40.634396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.634525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.634572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.028 [2024-10-01 13:52:40.634592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.634625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.634671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.634693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.634708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.634738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.028 [2024-10-01 13:52:40.639251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.639369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.639399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.028 [2024-10-01 13:52:40.639417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.639450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.639507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.639526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.639541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.639572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.028 [2024-10-01 13:52:40.644486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.644602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.028 [2024-10-01 13:52:40.644633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.028 [2024-10-01 13:52:40.644651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.028 [2024-10-01 13:52:40.644684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.028 [2024-10-01 13:52:40.644715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.028 [2024-10-01 13:52:40.644732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.028 [2024-10-01 13:52:40.644746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.028 [2024-10-01 13:52:40.644776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.028 [2024-10-01 13:52:40.649365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.028 [2024-10-01 13:52:40.649482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.649514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.029 [2024-10-01 13:52:40.649532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.649578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.649638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.649676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.649708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.649763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.029 [2024-10-01 13:52:40.654811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.654956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.654989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.029 [2024-10-01 13:52:40.655008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.655042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.655074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.655091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.655106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.655167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.029 [2024-10-01 13:52:40.659580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.659704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.659735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.029 [2024-10-01 13:52:40.659753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.659786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.659817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.659834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.659847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.659878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.029 [2024-10-01 13:52:40.664992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.665117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.665149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.029 [2024-10-01 13:52:40.665167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.665216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.665251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.665269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.665283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.665314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.029 [2024-10-01 13:52:40.669813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.669943] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.669974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.029 [2024-10-01 13:52:40.669993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.670026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.670057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.670074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.670089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.670120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.029 [2024-10-01 13:52:40.675087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.675202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.675233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.029 [2024-10-01 13:52:40.675285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.675320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.675900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.675952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.675972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.676139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.029 [2024-10-01 13:52:40.679905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.680039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.680071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.029 [2024-10-01 13:52:40.680089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.680121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.680153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.680170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.680184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.680215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.029 [2024-10-01 13:52:40.685182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.686500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.686596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.029 [2024-10-01 13:52:40.686618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.687615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.687757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.687783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.687798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.687831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.029 [2024-10-01 13:52:40.690015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.690126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.690158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.029 [2024-10-01 13:52:40.690176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.690208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.690240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.690281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.690297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.690895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.029 [2024-10-01 13:52:40.695280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.695395] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.695426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.029 [2024-10-01 13:52:40.695444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.696685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.696946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.696982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.697001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.697898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.029 [2024-10-01 13:52:40.700106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.700229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.700270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.029 [2024-10-01 13:52:40.700290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.701539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.702336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.702375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.702395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.702737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.029 [2024-10-01 13:52:40.705366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.706053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.706094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.029 [2024-10-01 13:52:40.706114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.706280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.706408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.706429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.706444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.706483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.029 [2024-10-01 13:52:40.710198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.710317] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.710349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.029 [2024-10-01 13:52:40.710367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.710400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.710431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.710449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.710463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.711721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.029 [2024-10-01 13:52:40.717390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.717741] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.717785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.029 [2024-10-01 13:52:40.717807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.717879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.717941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.717968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.717986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.718018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.029 [2024-10-01 13:52:40.720297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.720407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.720438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.029 [2024-10-01 13:52:40.720456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.721055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.029 [2024-10-01 13:52:40.721262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.029 [2024-10-01 13:52:40.721299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.029 [2024-10-01 13:52:40.721317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.029 [2024-10-01 13:52:40.721436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.029 [2024-10-01 13:52:40.728681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.029 [2024-10-01 13:52:40.728837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.029 [2024-10-01 13:52:40.728870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.029 [2024-10-01 13:52:40.728889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.029 [2024-10-01 13:52:40.728973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.030 [2024-10-01 13:52:40.729008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.030 [2024-10-01 13:52:40.729026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.030 [2024-10-01 13:52:40.729041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.030 [2024-10-01 13:52:40.729073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.030 [2024-10-01 13:52:40.732365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.030 [2024-10-01 13:52:40.732781] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.030 [2024-10-01 13:52:40.732827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.030 [2024-10-01 13:52:40.732848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.030 [2024-10-01 13:52:40.732936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.030 [2024-10-01 13:52:40.732977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.030 [2024-10-01 13:52:40.732996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.030 [2024-10-01 13:52:40.733011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.030 [2024-10-01 13:52:40.733043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.030 [2024-10-01 13:52:40.739142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.030 [2024-10-01 13:52:40.739880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.030 [2024-10-01 13:52:40.739939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.030 [2024-10-01 13:52:40.739963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.030 [2024-10-01 13:52:40.740137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.030 [2024-10-01 13:52:40.740258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.030 [2024-10-01 13:52:40.740280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.030 [2024-10-01 13:52:40.740297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.030 [2024-10-01 13:52:40.740338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.030 [2024-10-01 13:52:40.743857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.030 [2024-10-01 13:52:40.743998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.030 [2024-10-01 13:52:40.744031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.030 [2024-10-01 13:52:40.744050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.030 [2024-10-01 13:52:40.744083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.030 [2024-10-01 13:52:40.744114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.030 [2024-10-01 13:52:40.744131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.030 [2024-10-01 13:52:40.744174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.030 [2024-10-01 13:52:40.744209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.030 [2024-10-01 13:52:40.749712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.030 [2024-10-01 13:52:40.749831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.030 [2024-10-01 13:52:40.749862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.030 [2024-10-01 13:52:40.749880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.030 [2024-10-01 13:52:40.749932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.030 [2024-10-01 13:52:40.749968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.030 [2024-10-01 13:52:40.749986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.030 [2024-10-01 13:52:40.750000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.030 [2024-10-01 13:52:40.750031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.030 [2024-10-01 13:52:40.754195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.030 [2024-10-01 13:52:40.754311] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.030 [2024-10-01 13:52:40.754343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.030 [2024-10-01 13:52:40.754362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.030 [2024-10-01 13:52:40.754971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.030 [2024-10-01 13:52:40.755171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.030 [2024-10-01 13:52:40.755208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.030 [2024-10-01 13:52:40.755224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.030 [2024-10-01 13:52:40.755333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.030 [2024-10-01 13:52:40.760012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.030 [2024-10-01 13:52:40.760136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.030 [2024-10-01 13:52:40.760169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.030 [2024-10-01 13:52:40.760187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.030 [2024-10-01 13:52:40.760220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.030 [2024-10-01 13:52:40.760251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.030 [2024-10-01 13:52:40.760269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.030 [2024-10-01 13:52:40.760284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.030 [2024-10-01 13:52:40.760315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.030 [2024-10-01 13:52:40.764710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.030 [2024-10-01 13:52:40.764853] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.030 [2024-10-01 13:52:40.764884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.030 [2024-10-01 13:52:40.764902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.030 [2024-10-01 13:52:40.764951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.030 [2024-10-01 13:52:40.764983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.030 [2024-10-01 13:52:40.765000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.030 [2024-10-01 13:52:40.765014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.030 [2024-10-01 13:52:40.765077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.030 [2024-10-01 13:52:40.770104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.030 [2024-10-01 13:52:40.770217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.030 [2024-10-01 13:52:40.770248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.030 [2024-10-01 13:52:40.770266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.030 [2024-10-01 13:52:40.770299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.030 [2024-10-01 13:52:40.770330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.030 [2024-10-01 13:52:40.770347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.030 [2024-10-01 13:52:40.770361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.030 [2024-10-01 13:52:40.770391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.030 [2024-10-01 13:52:40.774908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.030 [2024-10-01 13:52:40.775039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.030 [2024-10-01 13:52:40.775076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.030 [2024-10-01 13:52:40.775094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.030 [2024-10-01 13:52:40.775138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.030 [2024-10-01 13:52:40.775171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.030 [2024-10-01 13:52:40.775188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.030 [2024-10-01 13:52:40.775202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.030 [2024-10-01 13:52:40.775233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.030 [2024-10-01 13:52:40.780193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.030 [2024-10-01 13:52:40.780307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.030 [2024-10-01 13:52:40.780338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.030 [2024-10-01 13:52:40.780356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.030 [2024-10-01 13:52:40.780388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.030 [2024-10-01 13:52:40.780439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.030 [2024-10-01 13:52:40.780458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.030 [2024-10-01 13:52:40.780473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.030 [2024-10-01 13:52:40.780503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.030 [2024-10-01 13:52:40.785029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.030 [2024-10-01 13:52:40.785144] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.030 [2024-10-01 13:52:40.785176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.030 [2024-10-01 13:52:40.785193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.030 [2024-10-01 13:52:40.785225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.030 [2024-10-01 13:52:40.785272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.030 [2024-10-01 13:52:40.785293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.030 [2024-10-01 13:52:40.785308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.030 [2024-10-01 13:52:40.785339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.030 [2024-10-01 13:52:40.790283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.030 [2024-10-01 13:52:40.790397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.030 [2024-10-01 13:52:40.790428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.030 [2024-10-01 13:52:40.790446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.030 [2024-10-01 13:52:40.791673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.030 [2024-10-01 13:52:40.792459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.030 [2024-10-01 13:52:40.792499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.030 [2024-10-01 13:52:40.792519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.030 [2024-10-01 13:52:40.792838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.030 [2024-10-01 13:52:40.795120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.030 [2024-10-01 13:52:40.795233] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.030 [2024-10-01 13:52:40.795264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.030 [2024-10-01 13:52:40.795282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.030 [2024-10-01 13:52:40.795315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.030 [2024-10-01 13:52:40.795346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.030 [2024-10-01 13:52:40.795362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.030 [2024-10-01 13:52:40.795376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.030 [2024-10-01 13:52:40.795425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.030 [2024-10-01 13:52:40.800375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.030 [2024-10-01 13:52:40.800493] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.030 [2024-10-01 13:52:40.800525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.030 [2024-10-01 13:52:40.800543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.801767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.802034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.802065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.802081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.803003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.031 [2024-10-01 13:52:40.805208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.805320] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.805352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.031 [2024-10-01 13:52:40.805370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.806617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.807433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.807472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.807491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.807812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.031 [2024-10-01 13:52:40.811270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.811399] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.811431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.031 [2024-10-01 13:52:40.811450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.811483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.811515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.811533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.811547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.811578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.031 [2024-10-01 13:52:40.815297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.815421] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.815453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.031 [2024-10-01 13:52:40.815503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.815538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.815570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.815588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.815602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.816862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.031 [2024-10-01 13:52:40.822767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.822983] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.823019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.031 [2024-10-01 13:52:40.823038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.823075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.823127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.823150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.823167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.823200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.031 [2024-10-01 13:52:40.825396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.825508] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.825539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.031 [2024-10-01 13:52:40.825558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.826170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.826361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.826397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.826416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.826529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.031 [2024-10-01 13:52:40.833881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.834031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.834064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.031 [2024-10-01 13:52:40.834082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.834116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.834147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.834192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.834210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.834243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.031 [2024-10-01 13:52:40.835483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.835595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.835627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.031 [2024-10-01 13:52:40.835645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.836879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.837688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.837728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.837748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.838084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.031 [2024-10-01 13:52:40.844174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.844291] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.844322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.031 [2024-10-01 13:52:40.844340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.844926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.845119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.845155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.845174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.845295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.031 [2024-10-01 13:52:40.845575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.845678] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.845710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.031 [2024-10-01 13:52:40.845727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.846967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.847206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.847243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.847261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.848170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.031 [2024-10-01 13:52:40.854703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.854825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.854858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.031 [2024-10-01 13:52:40.854875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.854909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.854961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.854978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.854992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.855022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.031 [2024-10-01 13:52:40.856359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.856478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.856508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.031 [2024-10-01 13:52:40.856526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.856559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.856590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.856607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.856621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.856651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.031 [2024-10-01 13:52:40.864851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.864988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.865021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.031 [2024-10-01 13:52:40.865039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.865073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.865104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.865122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.865136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.865167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.031 [2024-10-01 13:52:40.869312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 8126.67 IOPS, 31.74 MiB/s [2024-10-01 13:52:40.870182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.870228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.031 [2024-10-01 13:52:40.870280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.870388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.870429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.870446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.870461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.871076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.031 [2024-10-01 13:52:40.875008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.875125] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.875158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.031 [2024-10-01 13:52:40.875176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.875224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.875271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.875291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.875306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.875336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.031 [2024-10-01 13:52:40.879422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.879759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.879803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.031 [2024-10-01 13:52:40.879824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.879998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.880115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.031 [2024-10-01 13:52:40.880135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.031 [2024-10-01 13:52:40.880150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.031 [2024-10-01 13:52:40.880190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.031 [2024-10-01 13:52:40.885100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.031 [2024-10-01 13:52:40.885220] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.031 [2024-10-01 13:52:40.885252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.031 [2024-10-01 13:52:40.885270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.031 [2024-10-01 13:52:40.885302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.031 [2024-10-01 13:52:40.885344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.885394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.885411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.885443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.032 [2024-10-01 13:52:40.890014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.890140] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.890172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.032 [2024-10-01 13:52:40.890191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.890225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.890261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.890278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.890293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.890323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.032 [2024-10-01 13:52:40.895196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.895312] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.895344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.032 [2024-10-01 13:52:40.895361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.895394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.895439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.895459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.895474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.896701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.032 [2024-10-01 13:52:40.900113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.900227] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.900259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.032 [2024-10-01 13:52:40.900277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.900319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.900350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.900367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.900382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.900412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.032 [2024-10-01 13:52:40.905286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.905435] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.905466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.032 [2024-10-01 13:52:40.905484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.905528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.905571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.905591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.905605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.906862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.032 [2024-10-01 13:52:40.910207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.910320] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.910351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.032 [2024-10-01 13:52:40.910369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.910401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.910432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.910449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.910464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.911705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.032 [2024-10-01 13:52:40.915411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.915528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.915560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.032 [2024-10-01 13:52:40.915578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.916177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.916374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.916411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.916430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.916540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.032 [2024-10-01 13:52:40.920296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.920417] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.920448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.032 [2024-10-01 13:52:40.920466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.920517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.920550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.920567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.920581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.920612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.032 [2024-10-01 13:52:40.928271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.928393] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.928426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.032 [2024-10-01 13:52:40.928444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.928477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.928508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.928525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.928539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.928570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.032 [2024-10-01 13:52:40.930400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.930511] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.930555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.032 [2024-10-01 13:52:40.930575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.931165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.931349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.931395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.931413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.931541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.032 [2024-10-01 13:52:40.938650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.938765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.938798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.032 [2024-10-01 13:52:40.938816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.938848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.938878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.938895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.938962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.938999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.032 [2024-10-01 13:52:40.942408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.942755] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.942803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.032 [2024-10-01 13:52:40.942833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.942905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.942961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.942979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.942994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.943025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.032 [2024-10-01 13:52:40.949044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.949162] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.949194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.032 [2024-10-01 13:52:40.949211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.949792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.949994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.950030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.950049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.950158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.032 [2024-10-01 13:52:40.953709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.953823] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.953854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.032 [2024-10-01 13:52:40.953871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.953904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.953955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.953974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.953989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.954019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.032 [2024-10-01 13:52:40.959649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.959763] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.959809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.032 [2024-10-01 13:52:40.959828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.959862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.959893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.959925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.959943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.959976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.032 [2024-10-01 13:52:40.964107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.964222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.964254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.032 [2024-10-01 13:52:40.964271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.964842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.965046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.965083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.965101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.965210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.032 [2024-10-01 13:52:40.969966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.970098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.032 [2024-10-01 13:52:40.970130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.032 [2024-10-01 13:52:40.970148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.032 [2024-10-01 13:52:40.970182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.032 [2024-10-01 13:52:40.970214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.032 [2024-10-01 13:52:40.970231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.032 [2024-10-01 13:52:40.970246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.032 [2024-10-01 13:52:40.970277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.032 [2024-10-01 13:52:40.974760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.032 [2024-10-01 13:52:40.974889] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:40.974936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.033 [2024-10-01 13:52:40.974957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:40.974992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:40.975058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:40.975089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:40.975104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:40.975135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.033 [2024-10-01 13:52:40.980125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:40.980243] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:40.980275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.033 [2024-10-01 13:52:40.980294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:40.980327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:40.980359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:40.980382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:40.980396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:40.980426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.033 [2024-10-01 13:52:40.984858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:40.984990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:40.985023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.033 [2024-10-01 13:52:40.985041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:40.985074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:40.985105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:40.985122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:40.985136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:40.985166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.033 [2024-10-01 13:52:40.990215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:40.990328] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:40.990360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.033 [2024-10-01 13:52:40.990377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:40.990979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:40.991189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:40.991229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:40.991247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:40.991356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.033 [2024-10-01 13:52:40.994962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:40.995078] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:40.995113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.033 [2024-10-01 13:52:40.995130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:40.995178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:40.995212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:40.995230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:40.995244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:40.996457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.033 [2024-10-01 13:52:41.001868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:41.002216] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:41.002260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.033 [2024-10-01 13:52:41.002281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:41.002352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:41.002391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:41.002410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:41.002424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:41.002455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.033 [2024-10-01 13:52:41.005601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:41.005720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:41.005751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.033 [2024-10-01 13:52:41.005769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:41.005802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:41.005833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:41.005850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:41.005864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:41.005894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.033 [2024-10-01 13:52:41.012803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:41.012932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:41.012964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.033 [2024-10-01 13:52:41.013008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:41.013044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:41.013076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:41.013093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:41.013107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:41.013138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.033 [2024-10-01 13:52:41.016692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:41.016806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:41.016837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.033 [2024-10-01 13:52:41.016855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:41.016904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:41.016967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:41.016986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:41.017000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:41.017031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.033 [2024-10-01 13:52:41.022901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:41.023572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:41.023616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.033 [2024-10-01 13:52:41.023637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:41.023803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:41.023943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:41.023966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:41.023981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:41.024021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.033 [2024-10-01 13:52:41.027412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:41.027527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:41.027559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.033 [2024-10-01 13:52:41.027577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:41.027610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:41.027641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:41.027678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:41.027693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:41.027726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.033 [2024-10-01 13:52:41.033149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:41.033280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:41.033312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.033 [2024-10-01 13:52:41.033330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:41.033371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:41.033403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:41.033420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:41.033435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:41.033466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.033 [2024-10-01 13:52:41.037537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:41.038230] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:41.038275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.033 [2024-10-01 13:52:41.038297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:41.038466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:41.038603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:41.038628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:41.038649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:41.038709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.033 [2024-10-01 13:52:41.043254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:41.043372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:41.043405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.033 [2024-10-01 13:52:41.043423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:41.043455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:41.043486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:41.043504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:41.043519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:41.043550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.033 [2024-10-01 13:52:41.047870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:41.048048] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:41.048080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.033 [2024-10-01 13:52:41.048098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.033 [2024-10-01 13:52:41.048132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.033 [2024-10-01 13:52:41.048162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.033 [2024-10-01 13:52:41.048180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.033 [2024-10-01 13:52:41.048194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.033 [2024-10-01 13:52:41.048225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.033 [2024-10-01 13:52:41.053348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.033 [2024-10-01 13:52:41.053462] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.033 [2024-10-01 13:52:41.053494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.034 [2024-10-01 13:52:41.053511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.053544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.053575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.053593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.053607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.053638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.034 [2024-10-01 13:52:41.058010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.058124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.058156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.034 [2024-10-01 13:52:41.058173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.058205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.058237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.058253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.058268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.058298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.034 [2024-10-01 13:52:41.064109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.064240] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.064272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.034 [2024-10-01 13:52:41.064289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.064339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.064372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.064389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.064404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.064434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.034 [2024-10-01 13:52:41.068104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.068218] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.068250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.034 [2024-10-01 13:52:41.068268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.069483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.069734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.069773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.069792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.070708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.034 [2024-10-01 13:52:41.075200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.075317] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.075348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.034 [2024-10-01 13:52:41.075366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.075400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.075430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.075448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.075463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.075493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.034 [2024-10-01 13:52:41.078605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.078730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.078760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.034 [2024-10-01 13:52:41.078778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.078811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.078842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.078859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.078891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.078942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.034 [2024-10-01 13:52:41.085736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.085855] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.085887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.034 [2024-10-01 13:52:41.085905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.085959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.085991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.086008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.086023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.086053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.034 [2024-10-01 13:52:41.089631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.089746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.089779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.034 [2024-10-01 13:52:41.089797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.089829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.089860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.089878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.089892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.089942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.034 [2024-10-01 13:52:41.095828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.096501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.096546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.034 [2024-10-01 13:52:41.096566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.096730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.096845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.096872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.096889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.096943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.034 [2024-10-01 13:52:41.100298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.100410] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.100458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.034 [2024-10-01 13:52:41.100477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.100510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.100541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.100559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.100573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.100603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.034 [2024-10-01 13:52:41.105942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.106056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.106088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.034 [2024-10-01 13:52:41.106105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.106138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.106168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.106186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.106200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.106230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.034 [2024-10-01 13:52:41.111132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.111252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.111283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.034 [2024-10-01 13:52:41.111300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.111333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.111364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.111381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.111395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.111441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.034 [2024-10-01 13:52:41.116036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.116150] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.116181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.034 [2024-10-01 13:52:41.116199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.116232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.116282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.116300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.116314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.116344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.034 [2024-10-01 13:52:41.121223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.121338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.121370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.034 [2024-10-01 13:52:41.121392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.121977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.122185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.122229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.122247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.122356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.034 [2024-10-01 13:52:41.126126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.126239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.126270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.034 [2024-10-01 13:52:41.126288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.127513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.127748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.127777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.127792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.128692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.034 [2024-10-01 13:52:41.133164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.133316] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.133349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.034 [2024-10-01 13:52:41.133366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.133400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.133431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.133448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.133462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.133511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.034 [2024-10-01 13:52:41.136624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.136747] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.136779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.034 [2024-10-01 13:52:41.136796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.034 [2024-10-01 13:52:41.136829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.034 [2024-10-01 13:52:41.136860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.034 [2024-10-01 13:52:41.136877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.034 [2024-10-01 13:52:41.136891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.034 [2024-10-01 13:52:41.136937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.034 [2024-10-01 13:52:41.143906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.034 [2024-10-01 13:52:41.144050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.034 [2024-10-01 13:52:41.144082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.035 [2024-10-01 13:52:41.144107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.144141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.144172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.144190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.144205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.144236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.035 [2024-10-01 13:52:41.147823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.148014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.148054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.035 [2024-10-01 13:52:41.148072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.148106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.148139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.148157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.148171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.148202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.035 [2024-10-01 13:52:41.154082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.154800] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.154846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.035 [2024-10-01 13:52:41.154901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.155097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.155216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.155238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.155253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.155294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.035 [2024-10-01 13:52:41.158705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.158820] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.158852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.035 [2024-10-01 13:52:41.158869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.158902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.158954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.158973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.158988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.159018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.035 [2024-10-01 13:52:41.164424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.164573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.164605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.035 [2024-10-01 13:52:41.164623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.164657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.164689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.164706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.164721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.164753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.035 [2024-10-01 13:52:41.168889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.169028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.169059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.035 [2024-10-01 13:52:41.169078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.169727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.169987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.170044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.170061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.170217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.035 [2024-10-01 13:52:41.174664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.174786] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.174818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.035 [2024-10-01 13:52:41.174836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.174868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.174900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.174935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.174951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.174984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.035 [2024-10-01 13:52:41.179312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.179429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.179460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.035 [2024-10-01 13:52:41.179478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.179510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.179541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.179558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.179572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.179602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.035 [2024-10-01 13:52:41.184763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.184888] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.184933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.035 [2024-10-01 13:52:41.184954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.184987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.185018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.185035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.185049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.185081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.035 [2024-10-01 13:52:41.189406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.189544] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.189576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.035 [2024-10-01 13:52:41.189594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.189625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.189656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.189673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.189687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.189717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.035 [2024-10-01 13:52:41.195524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.195646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.195678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.035 [2024-10-01 13:52:41.195696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.195743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.195777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.195794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.195808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.195838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.035 [2024-10-01 13:52:41.199519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.199633] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.199664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.035 [2024-10-01 13:52:41.199681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.200900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.201142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.201188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.201206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.202101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.035 [2024-10-01 13:52:41.206596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.206711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.206742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.035 [2024-10-01 13:52:41.206759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.206815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.206847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.206864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.206878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.206927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.035 [2024-10-01 13:52:41.210005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.210125] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.210156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.035 [2024-10-01 13:52:41.210173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.210206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.210237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.210254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.210268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.210297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.035 [2024-10-01 13:52:41.217226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.217339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.217370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.035 [2024-10-01 13:52:41.217388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.217420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.217450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.217466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.217480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.217511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.035 [2024-10-01 13:52:41.221104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.221218] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.221249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.035 [2024-10-01 13:52:41.221268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.221315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.221349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.221367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.221399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.221432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.035 [2024-10-01 13:52:41.227321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.227989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.228032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.035 [2024-10-01 13:52:41.228053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.228217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.035 [2024-10-01 13:52:41.228331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.035 [2024-10-01 13:52:41.228351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.035 [2024-10-01 13:52:41.228366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.035 [2024-10-01 13:52:41.228404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.035 [2024-10-01 13:52:41.231759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.035 [2024-10-01 13:52:41.231873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.035 [2024-10-01 13:52:41.231904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.035 [2024-10-01 13:52:41.231940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.035 [2024-10-01 13:52:41.231974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.232005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.232022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.232037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.232067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.036 [2024-10-01 13:52:41.237416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.237530] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.237560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.036 [2024-10-01 13:52:41.237577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.237610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.237641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.237658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.237672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.237703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.036 [2024-10-01 13:52:41.242571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.242695] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.242743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.036 [2024-10-01 13:52:41.242763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.242796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.242846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.242867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.242882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.242931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.036 [2024-10-01 13:52:41.247509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.247625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.247657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.036 [2024-10-01 13:52:41.247675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.247708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.247739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.247756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.247770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.247800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.036 [2024-10-01 13:52:41.252665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.252780] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.252811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.036 [2024-10-01 13:52:41.252829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.253418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.253602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.253629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.253645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.253752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.036 [2024-10-01 13:52:41.257602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.257715] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.257746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.036 [2024-10-01 13:52:41.257764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.259007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.259263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.259307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.259325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.260230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.036 [2024-10-01 13:52:41.264594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.264766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.264799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.036 [2024-10-01 13:52:41.264817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.264850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.264881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.264898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.264929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.264964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.036 [2024-10-01 13:52:41.268028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.268151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.268182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.036 [2024-10-01 13:52:41.268201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.268233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.268264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.268281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.268295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.268325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.036 [2024-10-01 13:52:41.275260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.275382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.275414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.036 [2024-10-01 13:52:41.275432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.275464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.275496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.275512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.275527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.275577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.036 [2024-10-01 13:52:41.279175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.279328] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.279377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.036 [2024-10-01 13:52:41.279394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.279429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.279460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.279477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.279492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.279523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.036 [2024-10-01 13:52:41.285419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.285533] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.285565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.036 [2024-10-01 13:52:41.285583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.286169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.286354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.286382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.286397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.286503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.036 [2024-10-01 13:52:41.289891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.290028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.290060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.036 [2024-10-01 13:52:41.290078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.290110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.290141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.290158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.290172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.290202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.036 [2024-10-01 13:52:41.295602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.295717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.295748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.036 [2024-10-01 13:52:41.295782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.295830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.295864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.295882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.295896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.295944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.036 [2024-10-01 13:52:41.299998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.300649] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.300693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.036 [2024-10-01 13:52:41.300713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.300877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.301007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.301036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.301051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.301090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.036 [2024-10-01 13:52:41.305697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.305810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.305842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.036 [2024-10-01 13:52:41.305859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.305891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.305937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.305960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.305975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.306005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.036 [2024-10-01 13:52:41.310155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.310270] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.310301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.036 [2024-10-01 13:52:41.310318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.310350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.310381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.310415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.310431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.310463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.036 [2024-10-01 13:52:41.315791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.315906] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.315951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.036 [2024-10-01 13:52:41.315970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.036 [2024-10-01 13:52:41.316003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.036 [2024-10-01 13:52:41.317221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.036 [2024-10-01 13:52:41.317260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.036 [2024-10-01 13:52:41.317278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.036 [2024-10-01 13:52:41.317508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.036 [2024-10-01 13:52:41.320243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.036 [2024-10-01 13:52:41.320356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.036 [2024-10-01 13:52:41.320387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.037 [2024-10-01 13:52:41.320405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.037 [2024-10-01 13:52:41.320437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.037 [2024-10-01 13:52:41.320468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.037 [2024-10-01 13:52:41.320485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.037 [2024-10-01 13:52:41.320499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.037 [2024-10-01 13:52:41.320529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.037 [2024-10-01 13:52:41.326478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.037 [2024-10-01 13:52:41.326625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.037 [2024-10-01 13:52:41.326658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.037 [2024-10-01 13:52:41.326676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.037 [2024-10-01 13:52:41.326709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.037 [2024-10-01 13:52:41.326740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.037 [2024-10-01 13:52:41.326757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.037 [2024-10-01 13:52:41.326772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.037 [2024-10-01 13:52:41.326802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.037 [2024-10-01 13:52:41.330330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.037 [2024-10-01 13:52:41.330475] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.037 [2024-10-01 13:52:41.330508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.037 [2024-10-01 13:52:41.330526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.037 [2024-10-01 13:52:41.330572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.037 [2024-10-01 13:52:41.330605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.037 [2024-10-01 13:52:41.330621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.037 [2024-10-01 13:52:41.330635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.037 [2024-10-01 13:52:41.330672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.037 [2024-10-01 13:52:41.337720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.037 [2024-10-01 13:52:41.338129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.037 [2024-10-01 13:52:41.338176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.037 [2024-10-01 13:52:41.338197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.037 [2024-10-01 13:52:41.338270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.037 [2024-10-01 13:52:41.338329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.037 [2024-10-01 13:52:41.338352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.037 [2024-10-01 13:52:41.338368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.037 [2024-10-01 13:52:41.338400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.037 [2024-10-01 13:52:41.340449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.037 [2024-10-01 13:52:41.340560] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.037 [2024-10-01 13:52:41.340592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.037 [2024-10-01 13:52:41.340609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.037 [2024-10-01 13:52:41.340642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.037 [2024-10-01 13:52:41.340672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.037 [2024-10-01 13:52:41.340690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.037 [2024-10-01 13:52:41.340704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.037 [2024-10-01 13:52:41.340734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.037 [2024-10-01 13:52:41.349013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.037 [2024-10-01 13:52:41.349178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.037 [2024-10-01 13:52:41.349212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.037 [2024-10-01 13:52:41.349231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.037 [2024-10-01 13:52:41.349296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.037 [2024-10-01 13:52:41.349329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.037 [2024-10-01 13:52:41.349347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.037 [2024-10-01 13:52:41.349363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.037 [2024-10-01 13:52:41.349395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.037 [2024-10-01 13:52:41.350545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.037 [2024-10-01 13:52:41.351873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.037 [2024-10-01 13:52:41.351930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.037 [2024-10-01 13:52:41.351953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.037 [2024-10-01 13:52:41.352730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.037 [2024-10-01 13:52:41.353103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.037 [2024-10-01 13:52:41.353141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.037 [2024-10-01 13:52:41.353160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.037 [2024-10-01 13:52:41.353233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.037 [2024-10-01 13:52:41.359144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.037 [2024-10-01 13:52:41.359853] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.037 [2024-10-01 13:52:41.359900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.037 [2024-10-01 13:52:41.359937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.037 [2024-10-01 13:52:41.360108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.037 [2024-10-01 13:52:41.360226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.037 [2024-10-01 13:52:41.360248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.037 [2024-10-01 13:52:41.360264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.037 [2024-10-01 13:52:41.360323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.037 [2024-10-01 13:52:41.362072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.037 [2024-10-01 13:52:41.363084] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.037 [2024-10-01 13:52:41.363128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.037 [2024-10-01 13:52:41.363150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.037 [2024-10-01 13:52:41.363890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.037 [2024-10-01 13:52:41.364026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.037 [2024-10-01 13:52:41.364051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.037 [2024-10-01 13:52:41.364096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.037 [2024-10-01 13:52:41.364132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.037 [2024-10-01 13:52:41.369565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.037 [2024-10-01 13:52:41.369700] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.037 [2024-10-01 13:52:41.369733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.037 [2024-10-01 13:52:41.369752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.037 [2024-10-01 13:52:41.369786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.037 [2024-10-01 13:52:41.369817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.037 [2024-10-01 13:52:41.369834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.037 [2024-10-01 13:52:41.369849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.037 [2024-10-01 13:52:41.369879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.037 [2024-10-01 13:52:41.372176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.037 [2024-10-01 13:52:41.372287] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.037 [2024-10-01 13:52:41.372318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.038 [2024-10-01 13:52:41.372336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.038 [2024-10-01 13:52:41.373864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.038 [2024-10-01 13:52:41.374082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.038 [2024-10-01 13:52:41.374110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.038 [2024-10-01 13:52:41.374126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.038 [2024-10-01 13:52:41.374762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.038 [2024-10-01 13:52:41.379862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.038 [2024-10-01 13:52:41.380019] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.038 [2024-10-01 13:52:41.380054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.038 [2024-10-01 13:52:41.380073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.038 [2024-10-01 13:52:41.380106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.038 [2024-10-01 13:52:41.380137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.038 [2024-10-01 13:52:41.380155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.038 [2024-10-01 13:52:41.380170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.038 [2024-10-01 13:52:41.380201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.038 [2024-10-01 13:52:41.382754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.038 [2024-10-01 13:52:41.382882] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.038 [2024-10-01 13:52:41.382960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.038 [2024-10-01 13:52:41.382982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.038 [2024-10-01 13:52:41.383017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.038 [2024-10-01 13:52:41.383049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.038 [2024-10-01 13:52:41.383066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.038 [2024-10-01 13:52:41.383081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.038 [2024-10-01 13:52:41.383112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.038 [2024-10-01 13:52:41.389986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.038 [2024-10-01 13:52:41.390126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.038 [2024-10-01 13:52:41.390159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.038 [2024-10-01 13:52:41.390183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.038 [2024-10-01 13:52:41.390231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.038 [2024-10-01 13:52:41.390266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.038 [2024-10-01 13:52:41.390284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.038 [2024-10-01 13:52:41.390299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.038 [2024-10-01 13:52:41.390330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.038 [2024-10-01 13:52:41.393590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.038 [2024-10-01 13:52:41.393704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.038 [2024-10-01 13:52:41.393736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.038 [2024-10-01 13:52:41.393754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.038 [2024-10-01 13:52:41.393787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.038 [2024-10-01 13:52:41.393818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.038 [2024-10-01 13:52:41.393834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.038 [2024-10-01 13:52:41.393849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.038 [2024-10-01 13:52:41.393880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.038 [2024-10-01 13:52:41.400096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.038 [2024-10-01 13:52:41.400244] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.038 [2024-10-01 13:52:41.400277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.038 [2024-10-01 13:52:41.400296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.038 [2024-10-01 13:52:41.400890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.038 [2024-10-01 13:52:41.401123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.038 [2024-10-01 13:52:41.401160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.038 [2024-10-01 13:52:41.401180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.038 [2024-10-01 13:52:41.401293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.038 [2024-10-01 13:52:41.403818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.038 [2024-10-01 13:52:41.404537] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.038 [2024-10-01 13:52:41.404584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.038 [2024-10-01 13:52:41.404606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.038 [2024-10-01 13:52:41.404778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.038 [2024-10-01 13:52:41.404899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.038 [2024-10-01 13:52:41.404935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.038 [2024-10-01 13:52:41.404952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.038 [2024-10-01 13:52:41.404994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.038 [2024-10-01 13:52:41.412517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.038 [2024-10-01 13:52:41.412749] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.038 [2024-10-01 13:52:41.412786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.038 [2024-10-01 13:52:41.412806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.038 [2024-10-01 13:52:41.412843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.038 [2024-10-01 13:52:41.412876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.038 [2024-10-01 13:52:41.412894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.038 [2024-10-01 13:52:41.412926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.038 [2024-10-01 13:52:41.412964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.038 [2024-10-01 13:52:41.414413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.038 [2024-10-01 13:52:41.414525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.038 [2024-10-01 13:52:41.414571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.038 [2024-10-01 13:52:41.414590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.038 [2024-10-01 13:52:41.414623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.038 [2024-10-01 13:52:41.414655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.038 [2024-10-01 13:52:41.414672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.038 [2024-10-01 13:52:41.414688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.038 [2024-10-01 13:52:41.414752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.038 [2024-10-01 13:52:41.423433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.038 [2024-10-01 13:52:41.423572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.038 [2024-10-01 13:52:41.423604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.038 [2024-10-01 13:52:41.423622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.038 [2024-10-01 13:52:41.423656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.038 [2024-10-01 13:52:41.423688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.038 [2024-10-01 13:52:41.423706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.038 [2024-10-01 13:52:41.423721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.038 [2024-10-01 13:52:41.423751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.038 [2024-10-01 13:52:41.424588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.038 [2024-10-01 13:52:41.424698] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.038 [2024-10-01 13:52:41.424728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.038 [2024-10-01 13:52:41.424746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.038 [2024-10-01 13:52:41.424778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.038 [2024-10-01 13:52:41.424809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.038 [2024-10-01 13:52:41.424826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.038 [2024-10-01 13:52:41.424841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.038 [2024-10-01 13:52:41.424871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.038 [2024-10-01 13:52:41.433736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.038 [2024-10-01 13:52:41.434477] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.038 [2024-10-01 13:52:41.434524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.038 [2024-10-01 13:52:41.434557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.038 [2024-10-01 13:52:41.434730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.038 [2024-10-01 13:52:41.434873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.038 [2024-10-01 13:52:41.434908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.038 [2024-10-01 13:52:41.434942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.038 [2024-10-01 13:52:41.434987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.038 [2024-10-01 13:52:41.435014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.038 [2024-10-01 13:52:41.435097] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.038 [2024-10-01 13:52:41.435126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.038 [2024-10-01 13:52:41.435169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.038 [2024-10-01 13:52:41.435204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.038 [2024-10-01 13:52:41.435235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.038 [2024-10-01 13:52:41.435253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.038 [2024-10-01 13:52:41.435266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.038 [2024-10-01 13:52:41.436560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.038 [2024-10-01 13:52:41.444358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.038 [2024-10-01 13:52:41.444533] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.038 [2024-10-01 13:52:41.444568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.039 [2024-10-01 13:52:41.444587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.039 [2024-10-01 13:52:41.444624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.039 [2024-10-01 13:52:41.444662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.039 [2024-10-01 13:52:41.444680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.039 [2024-10-01 13:52:41.444696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.039 [2024-10-01 13:52:41.444728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.039 [2024-10-01 13:52:41.445074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.039 [2024-10-01 13:52:41.445163] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.039 [2024-10-01 13:52:41.445192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.039 [2024-10-01 13:52:41.445210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.039 [2024-10-01 13:52:41.445242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.039 [2024-10-01 13:52:41.445839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.039 [2024-10-01 13:52:41.445877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.039 [2024-10-01 13:52:41.445897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.039 [2024-10-01 13:52:41.446081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.039 [2024-10-01 13:52:41.454718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.039 [2024-10-01 13:52:41.454903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.039 [2024-10-01 13:52:41.454953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.039 [2024-10-01 13:52:41.454974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.039 [2024-10-01 13:52:41.455011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.039 [2024-10-01 13:52:41.455042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.039 [2024-10-01 13:52:41.455091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.039 [2024-10-01 13:52:41.455109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.039 [2024-10-01 13:52:41.455145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.039 [2024-10-01 13:52:41.455197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.039 [2024-10-01 13:52:41.455284] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.039 [2024-10-01 13:52:41.455314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.039 [2024-10-01 13:52:41.455332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.039 [2024-10-01 13:52:41.456575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.039 [2024-10-01 13:52:41.457383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.039 [2024-10-01 13:52:41.457422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.039 [2024-10-01 13:52:41.457441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.039 [2024-10-01 13:52:41.457768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.039 [2024-10-01 13:52:41.464963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.039 [2024-10-01 13:52:41.465111] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.039 [2024-10-01 13:52:41.465147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.039 [2024-10-01 13:52:41.465167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.039 [2024-10-01 13:52:41.465203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.039 [2024-10-01 13:52:41.465239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.039 [2024-10-01 13:52:41.465258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.039 [2024-10-01 13:52:41.465274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.039 [2024-10-01 13:52:41.465315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.039 [2024-10-01 13:52:41.465350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.039 [2024-10-01 13:52:41.465440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.039 [2024-10-01 13:52:41.465469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.039 [2024-10-01 13:52:41.465486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.039 [2024-10-01 13:52:41.465518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.039 [2024-10-01 13:52:41.466791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.039 [2024-10-01 13:52:41.466821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.039 [2024-10-01 13:52:41.466836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.039 [2024-10-01 13:52:41.467091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.039 [2024-10-01 13:52:41.475079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.039 [2024-10-01 13:52:41.475274] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.039 [2024-10-01 13:52:41.475315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.039 [2024-10-01 13:52:41.475334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.039 [2024-10-01 13:52:41.475370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.039 [2024-10-01 13:52:41.475407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.039 [2024-10-01 13:52:41.475426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.039 [2024-10-01 13:52:41.475441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.039 [2024-10-01 13:52:41.475483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.039 [2024-10-01 13:52:41.475518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.039 [2024-10-01 13:52:41.476192] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.039 [2024-10-01 13:52:41.476237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.039 [2024-10-01 13:52:41.476257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.039 [2024-10-01 13:52:41.476442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.039 [2024-10-01 13:52:41.476586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.039 [2024-10-01 13:52:41.476614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.039 [2024-10-01 13:52:41.476631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.039 [2024-10-01 13:52:41.476671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.039 [2024-10-01 13:52:41.485241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.039 [2024-10-01 13:52:41.485399] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.039 [2024-10-01 13:52:41.485433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.039 [2024-10-01 13:52:41.485452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.039 [2024-10-01 13:52:41.485487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.039 [2024-10-01 13:52:41.485523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.039 [2024-10-01 13:52:41.485541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.039 [2024-10-01 13:52:41.485557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.039 [2024-10-01 13:52:41.485588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.039 [2024-10-01 13:52:41.486858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.039 [2024-10-01 13:52:41.487717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.039 [2024-10-01 13:52:41.487762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.039 [2024-10-01 13:52:41.487784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.039 [2024-10-01 13:52:41.488153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.039 [2024-10-01 13:52:41.488246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.039 [2024-10-01 13:52:41.488271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.039 [2024-10-01 13:52:41.488287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.039 [2024-10-01 13:52:41.488339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.039 [2024-10-01 13:52:41.495408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.039 [2024-10-01 13:52:41.495546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.039 [2024-10-01 13:52:41.495579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.039 [2024-10-01 13:52:41.495598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.039 [2024-10-01 13:52:41.495631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.039 [2024-10-01 13:52:41.495666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.039 [2024-10-01 13:52:41.495686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.039 [2024-10-01 13:52:41.495701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.039 [2024-10-01 13:52:41.495739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.039 [2024-10-01 13:52:41.498369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.039 [2024-10-01 13:52:41.499216] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.039 [2024-10-01 13:52:41.499261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.039 [2024-10-01 13:52:41.499282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.040 [2024-10-01 13:52:41.499385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.040 [2024-10-01 13:52:41.499423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.040 [2024-10-01 13:52:41.499443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.040 [2024-10-01 13:52:41.499457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.040 [2024-10-01 13:52:41.499490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.040 [2024-10-01 13:52:41.505512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.040 [2024-10-01 13:52:41.505641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.040 [2024-10-01 13:52:41.505672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.040 [2024-10-01 13:52:41.505690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.040 [2024-10-01 13:52:41.505724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.040 [2024-10-01 13:52:41.505755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.040 [2024-10-01 13:52:41.505773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.040 [2024-10-01 13:52:41.505813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.040 [2024-10-01 13:52:41.505846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.040 [2024-10-01 13:52:41.509436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.040 [2024-10-01 13:52:41.509553] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.040 [2024-10-01 13:52:41.509584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.040 [2024-10-01 13:52:41.509602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.040 [2024-10-01 13:52:41.510203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.040 [2024-10-01 13:52:41.510417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.040 [2024-10-01 13:52:41.510453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.040 [2024-10-01 13:52:41.510472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.040 [2024-10-01 13:52:41.510596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.040 [2024-10-01 13:52:41.515610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.040 [2024-10-01 13:52:41.515726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.040 [2024-10-01 13:52:41.515758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.040 [2024-10-01 13:52:41.515776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.040 [2024-10-01 13:52:41.517019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.040 [2024-10-01 13:52:41.517790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.040 [2024-10-01 13:52:41.517829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.040 [2024-10-01 13:52:41.517849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.040 [2024-10-01 13:52:41.518182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.040 [2024-10-01 13:52:41.519947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.040 [2024-10-01 13:52:41.520061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.040 [2024-10-01 13:52:41.520109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.040 [2024-10-01 13:52:41.520130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.040 [2024-10-01 13:52:41.520163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.040 [2024-10-01 13:52:41.520194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.040 [2024-10-01 13:52:41.520212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.040 [2024-10-01 13:52:41.520226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.040 [2024-10-01 13:52:41.520256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.040 [2024-10-01 13:52:41.525701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.040 [2024-10-01 13:52:41.525816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.040 [2024-10-01 13:52:41.525887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.040 [2024-10-01 13:52:41.525923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.040 [2024-10-01 13:52:41.525961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.040 [2024-10-01 13:52:41.527206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.040 [2024-10-01 13:52:41.527248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.040 [2024-10-01 13:52:41.527267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.040 [2024-10-01 13:52:41.527511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.040 [2024-10-01 13:52:41.530257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.040 [2024-10-01 13:52:41.530391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.040 [2024-10-01 13:52:41.530424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.040 [2024-10-01 13:52:41.530443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.040 [2024-10-01 13:52:41.530489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.040 [2024-10-01 13:52:41.530523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.040 [2024-10-01 13:52:41.530553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.040 [2024-10-01 13:52:41.530570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.040 [2024-10-01 13:52:41.530602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.040 [2024-10-01 13:52:41.535798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.040 [2024-10-01 13:52:41.535967] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.040 [2024-10-01 13:52:41.536000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.040 [2024-10-01 13:52:41.536019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.040 [2024-10-01 13:52:41.536614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.040 [2024-10-01 13:52:41.536834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.040 [2024-10-01 13:52:41.536872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.040 [2024-10-01 13:52:41.536892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.040 [2024-10-01 13:52:41.537021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.040 [2024-10-01 13:52:41.540619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.040 [2024-10-01 13:52:41.540765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.040 [2024-10-01 13:52:41.540797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.040 [2024-10-01 13:52:41.540815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.040 [2024-10-01 13:52:41.540848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.040 [2024-10-01 13:52:41.540907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.040 [2024-10-01 13:52:41.540944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.040 [2024-10-01 13:52:41.540959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.040 [2024-10-01 13:52:41.540992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.040 [2024-10-01 13:52:41.545908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.040 [2024-10-01 13:52:41.546040] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.040 [2024-10-01 13:52:41.546072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.040 [2024-10-01 13:52:41.546090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.040 [2024-10-01 13:52:41.546123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.040 [2024-10-01 13:52:41.546155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.040 [2024-10-01 13:52:41.546172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.040 [2024-10-01 13:52:41.546187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.040 [2024-10-01 13:52:41.547426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.040 [2024-10-01 13:52:41.550720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.040 [2024-10-01 13:52:41.550835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.040 [2024-10-01 13:52:41.550867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.040 [2024-10-01 13:52:41.550885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.040 [2024-10-01 13:52:41.550933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.040 [2024-10-01 13:52:41.550968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.040 [2024-10-01 13:52:41.550986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.040 [2024-10-01 13:52:41.551000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.040 [2024-10-01 13:52:41.551031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.040 [2024-10-01 13:52:41.556018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.040 [2024-10-01 13:52:41.556135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.040 [2024-10-01 13:52:41.556167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.040 [2024-10-01 13:52:41.556185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.040 [2024-10-01 13:52:41.556218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.040 [2024-10-01 13:52:41.556250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.040 [2024-10-01 13:52:41.556268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.040 [2024-10-01 13:52:41.556282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.040 [2024-10-01 13:52:41.556339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.040 [2024-10-01 13:52:41.560816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.040 [2024-10-01 13:52:41.560952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.040 [2024-10-01 13:52:41.560986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.040 [2024-10-01 13:52:41.561004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.040 [2024-10-01 13:52:41.561037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.040 [2024-10-01 13:52:41.561068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.040 [2024-10-01 13:52:41.561085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.041 [2024-10-01 13:52:41.561100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.041 [2024-10-01 13:52:41.561131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.041 [2024-10-01 13:52:41.566110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.041 [2024-10-01 13:52:41.566228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.041 [2024-10-01 13:52:41.566259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.041 [2024-10-01 13:52:41.566276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.041 [2024-10-01 13:52:41.566309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.041 [2024-10-01 13:52:41.566906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.041 [2024-10-01 13:52:41.566958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.041 [2024-10-01 13:52:41.566977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.041 [2024-10-01 13:52:41.567161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.041 [2024-10-01 13:52:41.570922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.041 [2024-10-01 13:52:41.571052] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.041 [2024-10-01 13:52:41.571084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.041 [2024-10-01 13:52:41.571102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.041 [2024-10-01 13:52:41.571135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.041 [2024-10-01 13:52:41.571166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.041 [2024-10-01 13:52:41.571183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.041 [2024-10-01 13:52:41.571197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.041 [2024-10-01 13:52:41.571228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.041 [2024-10-01 13:52:41.576205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.041 [2024-10-01 13:52:41.577514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.041 [2024-10-01 13:52:41.577560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.041 [2024-10-01 13:52:41.577609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.041 [2024-10-01 13:52:41.578377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.041 [2024-10-01 13:52:41.578732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.041 [2024-10-01 13:52:41.578773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.041 [2024-10-01 13:52:41.578791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.041 [2024-10-01 13:52:41.578863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.041 [2024-10-01 13:52:41.581013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.041 [2024-10-01 13:52:41.581124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.041 [2024-10-01 13:52:41.581156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.041 [2024-10-01 13:52:41.581173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.041 [2024-10-01 13:52:41.581205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.041 [2024-10-01 13:52:41.581236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.041 [2024-10-01 13:52:41.581252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.041 [2024-10-01 13:52:41.581266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.041 [2024-10-01 13:52:41.581848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.041 [2024-10-01 13:52:41.586301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.041 [2024-10-01 13:52:41.586418] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.041 [2024-10-01 13:52:41.586449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.041 [2024-10-01 13:52:41.586467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.041 [2024-10-01 13:52:41.587711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.041 [2024-10-01 13:52:41.587982] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.041 [2024-10-01 13:52:41.588020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.041 [2024-10-01 13:52:41.588039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.041 [2024-10-01 13:52:41.588948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.041 [2024-10-01 13:52:41.591105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.041 [2024-10-01 13:52:41.591219] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.041 [2024-10-01 13:52:41.591251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.041 [2024-10-01 13:52:41.591269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.041 [2024-10-01 13:52:41.592494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.041 [2024-10-01 13:52:41.593303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.041 [2024-10-01 13:52:41.593371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.041 [2024-10-01 13:52:41.593391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.041 [2024-10-01 13:52:41.593712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.041 [2024-10-01 13:52:41.597163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.041 [2024-10-01 13:52:41.597289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.041 [2024-10-01 13:52:41.597320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.041 [2024-10-01 13:52:41.597337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.041 [2024-10-01 13:52:41.597384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.041 [2024-10-01 13:52:41.597419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.041 [2024-10-01 13:52:41.597437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.041 [2024-10-01 13:52:41.597452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.041 [2024-10-01 13:52:41.597482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.041 [2024-10-01 13:52:41.601199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.041 [2024-10-01 13:52:41.601313] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.041 [2024-10-01 13:52:41.601344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.041 [2024-10-01 13:52:41.601362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.041 [2024-10-01 13:52:41.601394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.041 [2024-10-01 13:52:41.601425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.041 [2024-10-01 13:52:41.601441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.041 [2024-10-01 13:52:41.601456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.041 [2024-10-01 13:52:41.602688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.041 [2024-10-01 13:52:41.608329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.041 [2024-10-01 13:52:41.608665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.041 [2024-10-01 13:52:41.608709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.041 [2024-10-01 13:52:41.608730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.041 [2024-10-01 13:52:41.608800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.041 [2024-10-01 13:52:41.608838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.041 [2024-10-01 13:52:41.608857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.041 [2024-10-01 13:52:41.608871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.041 [2024-10-01 13:52:41.608902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.041 [2024-10-01 13:52:41.611291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.041 [2024-10-01 13:52:41.611413] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.041 [2024-10-01 13:52:41.611444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.041 [2024-10-01 13:52:41.611462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.041 [2024-10-01 13:52:41.612049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.041 [2024-10-01 13:52:41.612232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.041 [2024-10-01 13:52:41.612269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.041 [2024-10-01 13:52:41.612287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.041 [2024-10-01 13:52:41.612396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.041 [2024-10-01 13:52:41.619544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.041 [2024-10-01 13:52:41.619668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.041 [2024-10-01 13:52:41.619699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.041 [2024-10-01 13:52:41.619717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.041 [2024-10-01 13:52:41.619750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.041 [2024-10-01 13:52:41.619781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.041 [2024-10-01 13:52:41.619798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.041 [2024-10-01 13:52:41.619812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.041 [2024-10-01 13:52:41.619842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.041 [2024-10-01 13:52:41.623371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.041 [2024-10-01 13:52:41.623712] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.041 [2024-10-01 13:52:41.623756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.041 [2024-10-01 13:52:41.623777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.041 [2024-10-01 13:52:41.623848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.041 [2024-10-01 13:52:41.623886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.041 [2024-10-01 13:52:41.623904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.041 [2024-10-01 13:52:41.623934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.041 [2024-10-01 13:52:41.623969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.041 [2024-10-01 13:52:41.630038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.041 [2024-10-01 13:52:41.630166] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.041 [2024-10-01 13:52:41.630198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.041 [2024-10-01 13:52:41.630217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.041 [2024-10-01 13:52:41.630856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.041 [2024-10-01 13:52:41.631080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.631118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.631137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.631277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.042 [2024-10-01 13:52:41.634825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.634970] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.635004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.042 [2024-10-01 13:52:41.635023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.635057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.635088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.635105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.635119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.635169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.042 [2024-10-01 13:52:41.640897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.641076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.641111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.042 [2024-10-01 13:52:41.641129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.641165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.641198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.641228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.641243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.641276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.042 [2024-10-01 13:52:41.645479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.645605] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.645637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.042 [2024-10-01 13:52:41.645655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.646273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.646464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.646500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.646557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.646674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.042 [2024-10-01 13:52:41.651346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.651475] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.651507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.042 [2024-10-01 13:52:41.651524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.651557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.651589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.651607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.651621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.651651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.042 [2024-10-01 13:52:41.656160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.656277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.656308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.042 [2024-10-01 13:52:41.656326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.656359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.656390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.656408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.656422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.656453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.042 [2024-10-01 13:52:41.661694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.661827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.661859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.042 [2024-10-01 13:52:41.661878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.661927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.661963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.661981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.661996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.662027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.042 [2024-10-01 13:52:41.666621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.666753] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.666809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.042 [2024-10-01 13:52:41.666830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.666865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.666897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.666931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.666949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.666982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.042 [2024-10-01 13:52:41.671796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.671946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.671978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.042 [2024-10-01 13:52:41.671997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.672031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.672063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.672080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.672095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.672125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.042 [2024-10-01 13:52:41.676992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.677132] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.677165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.042 [2024-10-01 13:52:41.677183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.677217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.677249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.677266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.677281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.677312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.042 [2024-10-01 13:52:41.681940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.682064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.682096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.042 [2024-10-01 13:52:41.682115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.682148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.682215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.682234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.682249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.682279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.042 [2024-10-01 13:52:41.687098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.687233] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.687266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.042 [2024-10-01 13:52:41.687284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.687317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.687349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.687366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.687381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.687411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.042 [2024-10-01 13:52:41.692266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.692394] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.692427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.042 [2024-10-01 13:52:41.692445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.692492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.692527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.692546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.692560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.692600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.042 [2024-10-01 13:52:41.697322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.697445] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.697477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.042 [2024-10-01 13:52:41.697495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.697528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.697560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.697577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.697591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.697663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.042 [2024-10-01 13:52:41.702366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.702484] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.702517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.042 [2024-10-01 13:52:41.702534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.702583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.702614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.702632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.702646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.702682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.042 [2024-10-01 13:52:41.707645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.707764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.707796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.042 [2024-10-01 13:52:41.707814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.707846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.707878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.707895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.707924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.707961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.042 [2024-10-01 13:52:41.712524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.712640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.712672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.042 [2024-10-01 13:52:41.712690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.712723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.712754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.042 [2024-10-01 13:52:41.712772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.042 [2024-10-01 13:52:41.712787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.042 [2024-10-01 13:52:41.712817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.042 [2024-10-01 13:52:41.717742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.042 [2024-10-01 13:52:41.717862] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.042 [2024-10-01 13:52:41.717894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.042 [2024-10-01 13:52:41.717958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.042 [2024-10-01 13:52:41.717995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.042 [2024-10-01 13:52:41.718027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.718044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.718058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.718090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.043 [2024-10-01 13:52:41.722868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.723050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.723091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.043 [2024-10-01 13:52:41.723111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.723146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.723178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.723196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.723211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.723243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.043 [2024-10-01 13:52:41.727849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.727990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.728023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.043 [2024-10-01 13:52:41.728041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.728083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.728115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.728132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.728147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.728178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.043 [2024-10-01 13:52:41.732987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.733112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.733145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.043 [2024-10-01 13:52:41.733163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.733197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.733228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.733279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.733295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.733327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.043 [2024-10-01 13:52:41.738237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.738397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.738436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.043 [2024-10-01 13:52:41.738455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.738489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.738522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.738553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.738571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.738603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.043 [2024-10-01 13:52:41.743311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.743486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.743520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.043 [2024-10-01 13:52:41.743547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.743599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.743632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.743658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.743674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.743705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.043 [2024-10-01 13:52:41.748364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.748520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.748552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.043 [2024-10-01 13:52:41.748571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.748606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.748637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.748654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.748670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.748701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.043 [2024-10-01 13:52:41.753761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.753930] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.753963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.043 [2024-10-01 13:52:41.753982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.754017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.754049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.754066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.754081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.754112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.043 [2024-10-01 13:52:41.758690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.758820] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.758852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.043 [2024-10-01 13:52:41.758871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.758905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.758954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.758973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.758988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.759029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.043 [2024-10-01 13:52:41.763883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.764025] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.764058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.043 [2024-10-01 13:52:41.764077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.764110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.764142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.764159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.764174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.764205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.043 [2024-10-01 13:52:41.768972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.769122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.769154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.043 [2024-10-01 13:52:41.769173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.769239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.769272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.769290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.769305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.769336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.043 [2024-10-01 13:52:41.774003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.774130] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.774162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.043 [2024-10-01 13:52:41.774181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.774214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.774246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.774263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.774277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.774308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.043 [2024-10-01 13:52:41.779080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.779203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.779234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.043 [2024-10-01 13:52:41.779252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.779285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.779317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.779334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.779350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.779381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.043 [2024-10-01 13:52:41.784194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.784321] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.784353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.043 [2024-10-01 13:52:41.784371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.784405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.784436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.784454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.784498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.784532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.043 [2024-10-01 13:52:41.789175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.789302] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.789335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.043 [2024-10-01 13:52:41.789353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.789386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.789418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.789435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.789449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.789481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.043 [2024-10-01 13:52:41.794298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.794439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.794472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.043 [2024-10-01 13:52:41.794490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.794524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.794571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.794591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.794606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.794637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.043 [2024-10-01 13:52:41.799511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.799670] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.799704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.043 [2024-10-01 13:52:41.799722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.043 [2024-10-01 13:52:41.799757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.043 [2024-10-01 13:52:41.799790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.043 [2024-10-01 13:52:41.799807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.043 [2024-10-01 13:52:41.799822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.043 [2024-10-01 13:52:41.799854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.043 [2024-10-01 13:52:41.804614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.043 [2024-10-01 13:52:41.804755] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.043 [2024-10-01 13:52:41.804819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.043 [2024-10-01 13:52:41.804857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.804893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.804946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.804967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.804982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.805013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.044 [2024-10-01 13:52:41.809629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.809749] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.809781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.044 [2024-10-01 13:52:41.809799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.809832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.809863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.809880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.809895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.809943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.044 [2024-10-01 13:52:41.814860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.814997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.815030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.044 [2024-10-01 13:52:41.815048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.815081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.815113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.815130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.815145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.815175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.044 [2024-10-01 13:52:41.819780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.819908] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.819954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.044 [2024-10-01 13:52:41.819972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.820011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.820073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.820092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.820107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.820138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.044 [2024-10-01 13:52:41.824971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.825098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.825130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.044 [2024-10-01 13:52:41.825148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.825181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.825212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.825230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.825245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.825277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.044 [2024-10-01 13:52:41.829993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.830119] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.830152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.044 [2024-10-01 13:52:41.830170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.830204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.830236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.830253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.830268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.830300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.044 [2024-10-01 13:52:41.835070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.835197] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.835230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.044 [2024-10-01 13:52:41.835248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.835281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.835312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.835329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.835344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.835404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.044 [2024-10-01 13:52:41.840091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.840212] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.840245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.044 [2024-10-01 13:52:41.840263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.840297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.840329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.840346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.840361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.840391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.044 [2024-10-01 13:52:41.845219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.845364] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.845396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.044 [2024-10-01 13:52:41.845414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.845447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.845478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.845496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.845511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.845542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.044 [2024-10-01 13:52:41.850192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.850311] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.850343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.044 [2024-10-01 13:52:41.850361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.850394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.850428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.850445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.850460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.850490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.044 [2024-10-01 13:52:41.855335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.855452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.855484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.044 [2024-10-01 13:52:41.855535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.855570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.855602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.855620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.855634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.855665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.044 [2024-10-01 13:52:41.860474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.860598] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.860631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.044 [2024-10-01 13:52:41.860649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.860682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.860723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.860740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.860755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.860786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.044 [2024-10-01 13:52:41.865500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.865638] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.865671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.044 [2024-10-01 13:52:41.865689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.865722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.865759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.865777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.865791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.865822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.044 8161.23 IOPS, 31.88 MiB/s [2024-10-01 13:52:41.870963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.871081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.871113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.044 [2024-10-01 13:52:41.871131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.871164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.871196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.871241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.871257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.871289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.044 [2024-10-01 13:52:41.875887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.876040] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.876083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.044 [2024-10-01 13:52:41.876101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.876134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.876165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.876182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.876197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.876227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.044 [2024-10-01 13:52:41.881060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.881195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.881228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.044 [2024-10-01 13:52:41.881246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.881289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.881320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.881337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.881352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.881383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.044 [2024-10-01 13:52:41.886002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.886126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.886159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.044 [2024-10-01 13:52:41.886177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.886211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.886242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.886264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.886279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.886312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.044 [2024-10-01 13:52:41.891242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.891391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.891424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.044 [2024-10-01 13:52:41.891442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.891476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.044 [2024-10-01 13:52:41.891508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.044 [2024-10-01 13:52:41.891525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.044 [2024-10-01 13:52:41.891548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.044 [2024-10-01 13:52:41.891580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.044 [2024-10-01 13:52:41.896250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.044 [2024-10-01 13:52:41.896377] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.044 [2024-10-01 13:52:41.896410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.044 [2024-10-01 13:52:41.896429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.044 [2024-10-01 13:52:41.896463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.896493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.896511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.896525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.896557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.045 [2024-10-01 13:52:41.901353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.901472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.901504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.045 [2024-10-01 13:52:41.901522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.901556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.901587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.901603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.901618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.901649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.045 [2024-10-01 13:52:41.906450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.906574] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.906607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.045 [2024-10-01 13:52:41.906626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.906693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.906726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.906743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.906761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.906792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.045 [2024-10-01 13:52:41.911448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.911592] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.911623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.045 [2024-10-01 13:52:41.911649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.911681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.911735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.911752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.911771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.911801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.045 [2024-10-01 13:52:41.916539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.916663] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.916694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.045 [2024-10-01 13:52:41.916713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.916746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.916778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.916796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.916811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.916842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.045 [2024-10-01 13:52:41.921733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.921852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.921885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.045 [2024-10-01 13:52:41.921903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.921955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.921987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.922013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.922068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.922121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.045 [2024-10-01 13:52:41.926672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.926791] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.926823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.045 [2024-10-01 13:52:41.926841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.926874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.926906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.926941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.926957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.926988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.045 [2024-10-01 13:52:41.931829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.931963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.931995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.045 [2024-10-01 13:52:41.932013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.932048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.932079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.932096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.932122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.932153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.045 [2024-10-01 13:52:41.936979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.937099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.937131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.045 [2024-10-01 13:52:41.937149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.937182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.937222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.937239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.937254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.937284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.045 [2024-10-01 13:52:41.941935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.942085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.942117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.045 [2024-10-01 13:52:41.942135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.942169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.942200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.942217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.942231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.942263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.045 [2024-10-01 13:52:41.947073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.947204] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.947241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.045 [2024-10-01 13:52:41.947259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.947292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.947324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.947341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.947356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.947386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.045 [2024-10-01 13:52:41.952105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.952231] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.952264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.045 [2024-10-01 13:52:41.952283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.952332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.952367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.952385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.952400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.952431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.045 [2024-10-01 13:52:41.957174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.957296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.957329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.045 [2024-10-01 13:52:41.957346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.957380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.957443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.957463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.957477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.957508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.045 [2024-10-01 13:52:41.962202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.962325] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.962357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.045 [2024-10-01 13:52:41.962375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.962409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.962440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.962457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.962472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.962502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.045 [2024-10-01 13:52:41.967273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.967405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.967438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.045 [2024-10-01 13:52:41.967456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.967488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.967520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.967537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.967551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.967582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.045 [2024-10-01 13:52:41.972301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.972428] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.972460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.045 [2024-10-01 13:52:41.972478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.972511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.972543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.045 [2024-10-01 13:52:41.972561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.045 [2024-10-01 13:52:41.972575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.045 [2024-10-01 13:52:41.972634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.045 [2024-10-01 13:52:41.977371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.045 [2024-10-01 13:52:41.977494] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.045 [2024-10-01 13:52:41.977526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.045 [2024-10-01 13:52:41.977544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.045 [2024-10-01 13:52:41.977577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.045 [2024-10-01 13:52:41.977609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:41.977626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:41.977641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:41.977671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.046 [2024-10-01 13:52:41.982403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:41.982521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:41.982566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.046 [2024-10-01 13:52:41.982586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:41.982646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:41.982682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:41.982700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:41.982714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:41.982745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.046 [2024-10-01 13:52:41.987469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:41.987587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:41.987618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.046 [2024-10-01 13:52:41.987637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:41.987669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:41.987700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:41.987718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:41.987732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:41.987762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.046 [2024-10-01 13:52:41.992498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:41.992612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:41.992644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.046 [2024-10-01 13:52:41.992690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:41.992725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:41.992757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:41.992774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:41.992788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:41.992819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.046 [2024-10-01 13:52:41.997564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:41.997678] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:41.997709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.046 [2024-10-01 13:52:41.997727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:41.997760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:41.997790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:41.997808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:41.997822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:41.997852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.046 [2024-10-01 13:52:42.002589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:42.002716] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:42.002747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.046 [2024-10-01 13:52:42.002765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:42.002797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:42.002828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:42.002845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:42.002859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:42.002889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.046 [2024-10-01 13:52:42.007652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:42.007777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:42.007809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.046 [2024-10-01 13:52:42.007826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:42.007860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:42.007891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:42.007948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:42.007966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:42.007999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.046 [2024-10-01 13:52:42.012709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:42.012827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:42.012858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.046 [2024-10-01 13:52:42.012877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:42.012949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:42.012997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:42.013015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:42.013029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:42.013061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.046 [2024-10-01 13:52:42.017760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:42.017876] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:42.017908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.046 [2024-10-01 13:52:42.017942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:42.017976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:42.018007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:42.018025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:42.018040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:42.018070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.046 [2024-10-01 13:52:42.022803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:42.022936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:42.022969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.046 [2024-10-01 13:52:42.022993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:42.023027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:42.023059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:42.023076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:42.023091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:42.023122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.046 [2024-10-01 13:52:42.028159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:42.028297] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:42.028329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.046 [2024-10-01 13:52:42.028347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:42.028379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:42.028411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:42.028428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:42.028443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:42.028475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.046 [2024-10-01 13:52:42.033132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:42.033260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:42.033299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.046 [2024-10-01 13:52:42.033319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:42.033353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:42.033385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:42.033402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:42.033417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:42.033449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.046 [2024-10-01 13:52:42.038255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:42.038378] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:42.038411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.046 [2024-10-01 13:52:42.038429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:42.038462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:42.038494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:42.038512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:42.038527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:42.038572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.046 [2024-10-01 13:52:42.043369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:42.043488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:42.043532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.046 [2024-10-01 13:52:42.043554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:42.043621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:42.043654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:42.043671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:42.043686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:42.043717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.046 [2024-10-01 13:52:42.048363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:42.048492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:42.048530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.046 [2024-10-01 13:52:42.048550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:42.048583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:42.048614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:42.048631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:42.048646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:42.048676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.046 [2024-10-01 13:52:42.053461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:42.053578] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:42.053617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.046 [2024-10-01 13:52:42.053635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:42.053668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:42.053699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:42.053717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:42.053732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:42.053762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.046 [2024-10-01 13:52:42.058660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:42.058781] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:42.058813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.046 [2024-10-01 13:52:42.058831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:42.058864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:42.058895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:42.058928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:42.058974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:42.059035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.046 [2024-10-01 13:52:42.063577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:42.063694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:42.063726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.046 [2024-10-01 13:52:42.063744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:42.063777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:42.063809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:42.063825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.046 [2024-10-01 13:52:42.063840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.046 [2024-10-01 13:52:42.063870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.046 [2024-10-01 13:52:42.068753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.046 [2024-10-01 13:52:42.068869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.046 [2024-10-01 13:52:42.068900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.046 [2024-10-01 13:52:42.068936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.046 [2024-10-01 13:52:42.068973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.046 [2024-10-01 13:52:42.069005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.046 [2024-10-01 13:52:42.069022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.069037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.069067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.047 [2024-10-01 13:52:42.073754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.073882] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.073929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.047 [2024-10-01 13:52:42.073950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.073984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.074016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.074034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.074048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.074079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.047 [2024-10-01 13:52:42.078844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.078997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.079030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.047 [2024-10-01 13:52:42.079048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.079080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.079111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.079128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.079143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.079174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.047 [2024-10-01 13:52:42.083862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.083993] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.084030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.047 [2024-10-01 13:52:42.084048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.084081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.084111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.084128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.084142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.084173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.047 [2024-10-01 13:52:42.088968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.089083] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.089115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.047 [2024-10-01 13:52:42.089133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.089166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.089196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.089213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.089228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.089258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.047 [2024-10-01 13:52:42.093969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.094085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.094117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.047 [2024-10-01 13:52:42.094134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.094185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.095423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.095463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.095483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.096256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.047 [2024-10-01 13:52:42.099061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.099185] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.099217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.047 [2024-10-01 13:52:42.099234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.099810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.100023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.100060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.100079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.100187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.047 [2024-10-01 13:52:42.104064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.104184] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.104216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.047 [2024-10-01 13:52:42.104234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.104267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.104298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.104315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.104339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.105552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.047 [2024-10-01 13:52:42.110348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.111227] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.111282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.047 [2024-10-01 13:52:42.111303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.111616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.111713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.111740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.111755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.111818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.047 [2024-10-01 13:52:42.114160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.114270] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.114300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.047 [2024-10-01 13:52:42.114318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.114927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.115128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.115158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.115174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.115281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.047 [2024-10-01 13:52:42.121689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.122501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.122556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.047 [2024-10-01 13:52:42.122578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.122675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.122713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.122731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.122745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.122781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.047 [2024-10-01 13:52:42.124249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.125550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.125595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.047 [2024-10-01 13:52:42.125616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.126385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.126744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.126783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.126801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.126873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.047 [2024-10-01 13:52:42.132938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.133050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.133098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.047 [2024-10-01 13:52:42.133119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.133693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.133879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.133932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.133950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.134057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.047 [2024-10-01 13:52:42.134337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.134438] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.134470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.047 [2024-10-01 13:52:42.134488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.135711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.135978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.136016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.136043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.136953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.047 [2024-10-01 13:52:42.143468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.143582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.143613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.047 [2024-10-01 13:52:42.143631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.143664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.143695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.143712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.143726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.143756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.047 [2024-10-01 13:52:42.145140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.145259] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.145290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.047 [2024-10-01 13:52:42.145308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.145341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.145390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.145410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.145424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.145455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.047 [2024-10-01 13:52:42.153557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.153672] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.153703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.047 [2024-10-01 13:52:42.153721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.153754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.153785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.153802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.153816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.153846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.047 [2024-10-01 13:52:42.156267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.156381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.156412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.047 [2024-10-01 13:52:42.156437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.156470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.156500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.156517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.156531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.156562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.047 [2024-10-01 13:52:42.163651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.047 [2024-10-01 13:52:42.163777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.047 [2024-10-01 13:52:42.163809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.047 [2024-10-01 13:52:42.163827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.047 [2024-10-01 13:52:42.165074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.047 [2024-10-01 13:52:42.165303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.047 [2024-10-01 13:52:42.165340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.047 [2024-10-01 13:52:42.165359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.047 [2024-10-01 13:52:42.166268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.048 [2024-10-01 13:52:42.167163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.167276] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.167308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.048 [2024-10-01 13:52:42.167325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.167358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.167388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.167405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.167420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.167450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.048 [2024-10-01 13:52:42.174195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.174319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.174350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.048 [2024-10-01 13:52:42.174368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.174409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.174440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.174457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.174471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.174519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.048 [2024-10-01 13:52:42.177707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.177828] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.177859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.048 [2024-10-01 13:52:42.177877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.177909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.177960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.177977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.177991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.178022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.048 [2024-10-01 13:52:42.185243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.185361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.185392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.048 [2024-10-01 13:52:42.185427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.185463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.185494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.185511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.185525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.185555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.048 [2024-10-01 13:52:42.187798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.188465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.188509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.048 [2024-10-01 13:52:42.188530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.188706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.188821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.188842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.188856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.188895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.048 [2024-10-01 13:52:42.195745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.195863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.195894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.048 [2024-10-01 13:52:42.195927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.195965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.195995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.196013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.196027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.196057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.048 [2024-10-01 13:52:42.199612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.199766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.199799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.048 [2024-10-01 13:52:42.199817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.199850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.199898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.199938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.199977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.200012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.048 [2024-10-01 13:52:42.205848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.206557] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.206606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.048 [2024-10-01 13:52:42.206628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.206798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.206930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.206953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.206968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.207008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.048 [2024-10-01 13:52:42.210359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.210474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.210504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.048 [2024-10-01 13:52:42.210522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.210568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.210601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.210618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.210633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.210663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.048 [2024-10-01 13:52:42.216030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.216145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.216176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.048 [2024-10-01 13:52:42.216194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.216226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.216257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.216274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.216289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.216319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.048 [2024-10-01 13:52:42.220449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.221161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.221211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.048 [2024-10-01 13:52:42.221233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.221400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.221527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.221560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.221578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.221620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.048 [2024-10-01 13:52:42.226130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.226244] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.226275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.048 [2024-10-01 13:52:42.226293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.226326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.226356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.226374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.226388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.226417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.048 [2024-10-01 13:52:42.230783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.230897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.230943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.048 [2024-10-01 13:52:42.230962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.230995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.231025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.231043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.231057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.231087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.048 [2024-10-01 13:52:42.236220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.236337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.236369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.048 [2024-10-01 13:52:42.236387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.236453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.236490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.236508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.236522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.236553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.048 [2024-10-01 13:52:42.240974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.241089] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.241121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.048 [2024-10-01 13:52:42.241138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.241171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.241202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.241219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.241233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.241263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.048 [2024-10-01 13:52:42.246315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.246427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.246459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.048 [2024-10-01 13:52:42.246477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.246510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.246555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.246575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.048 [2024-10-01 13:52:42.246589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.048 [2024-10-01 13:52:42.247180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.048 [2024-10-01 13:52:42.251069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.048 [2024-10-01 13:52:42.251183] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.048 [2024-10-01 13:52:42.251214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.048 [2024-10-01 13:52:42.251232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.048 [2024-10-01 13:52:42.251264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.048 [2024-10-01 13:52:42.251295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.048 [2024-10-01 13:52:42.251312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.251326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.251369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.049 [2024-10-01 13:52:42.258638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.258790] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.258823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.049 [2024-10-01 13:52:42.258841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.258874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.258905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.258938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.258954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.258986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.049 [2024-10-01 13:52:42.261160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.261268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.261299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.049 [2024-10-01 13:52:42.261316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.261349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.261945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.261983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.262002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.262193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.049 [2024-10-01 13:52:42.269459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.269576] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.269607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.049 [2024-10-01 13:52:42.269625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.269658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.269689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.269707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.269721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.269751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.049 [2024-10-01 13:52:42.273449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.273604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.273658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.049 [2024-10-01 13:52:42.273678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.273713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.273744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.273761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.273775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.273807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.049 [2024-10-01 13:52:42.279721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.279838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.279869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.049 [2024-10-01 13:52:42.279887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.280483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.280672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.280709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.280727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.280835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.049 [2024-10-01 13:52:42.284322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.284438] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.284469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.049 [2024-10-01 13:52:42.284487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.284520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.284551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.284568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.284582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.284614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.049 [2024-10-01 13:52:42.290053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.290170] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.290201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.049 [2024-10-01 13:52:42.290219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.290252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.290303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.290323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.290338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.290369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.049 [2024-10-01 13:52:42.294417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.294530] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.294574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.049 [2024-10-01 13:52:42.294593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.295180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.295366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.295402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.295420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.295529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.049 [2024-10-01 13:52:42.300142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.300255] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.300287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.049 [2024-10-01 13:52:42.300305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.300337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.300367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.300384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.300399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.300429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.049 [2024-10-01 13:52:42.304616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.304730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.304761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.049 [2024-10-01 13:52:42.304779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.304811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.304842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.304859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.304873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.304904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.049 [2024-10-01 13:52:42.310230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.310344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.310376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.049 [2024-10-01 13:52:42.310394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.310427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.311648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.311688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.311708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.311933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.049 [2024-10-01 13:52:42.314705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.314818] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.314849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.049 [2024-10-01 13:52:42.314867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.314899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.314948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.314966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.314980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.315010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.049 [2024-10-01 13:52:42.320821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.320960] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.320993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.049 [2024-10-01 13:52:42.321011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.321044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.321092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.321114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.321129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.321160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.049 [2024-10-01 13:52:42.324797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.324925] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.324957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.049 [2024-10-01 13:52:42.324992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.326207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.326457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.326485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.326501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.327411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.049 [2024-10-01 13:52:42.331869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.331996] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.332029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.049 [2024-10-01 13:52:42.332046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.332079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.332110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.332127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.332142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.332172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.049 [2024-10-01 13:52:42.335306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.335427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.335458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.049 [2024-10-01 13:52:42.335476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.335508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.335539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.335555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.335570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.335600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.049 [2024-10-01 13:52:42.342446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.342571] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.342604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.049 [2024-10-01 13:52:42.342622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.342656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.342686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.342703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.342735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.342768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.049 [2024-10-01 13:52:42.346331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.049 [2024-10-01 13:52:42.346447] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.049 [2024-10-01 13:52:42.346478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.049 [2024-10-01 13:52:42.346496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.049 [2024-10-01 13:52:42.346529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.049 [2024-10-01 13:52:42.346576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.049 [2024-10-01 13:52:42.346605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.049 [2024-10-01 13:52:42.346619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.049 [2024-10-01 13:52:42.346650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.050 [2024-10-01 13:52:42.353227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.353422] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.353453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.050 [2024-10-01 13:52:42.353471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.353512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.353546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.353564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.353578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.353620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.050 [2024-10-01 13:52:42.356981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.357095] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.357126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.050 [2024-10-01 13:52:42.357144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.357176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.357218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.357235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.357250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.357281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.050 [2024-10-01 13:52:42.363319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.363456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.363488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.050 [2024-10-01 13:52:42.363505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.364101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.364300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.364338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.364356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.364466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.050 [2024-10-01 13:52:42.367613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.367858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.367892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.050 [2024-10-01 13:52:42.367937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.368052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.368095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.368113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.368128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.368159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.050 [2024-10-01 13:52:42.375420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.375575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.375608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.050 [2024-10-01 13:52:42.375626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.375659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.375690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.375708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.375722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.375753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.050 [2024-10-01 13:52:42.377708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.377817] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.377848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.050 [2024-10-01 13:52:42.377866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.377936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.377972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.377990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.378004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.378035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.050 [2024-10-01 13:52:42.386041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.386164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.386195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.050 [2024-10-01 13:52:42.386213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.386246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.386277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.386295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.386309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.386340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.050 [2024-10-01 13:52:42.389853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.390017] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.390050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.050 [2024-10-01 13:52:42.390068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.390102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.390134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.390151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.390165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.390196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.050 [2024-10-01 13:52:42.396809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.397021] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.397054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.050 [2024-10-01 13:52:42.397073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.397114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.397148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.397166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.397204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.397239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.050 [2024-10-01 13:52:42.400596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.400716] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.400747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.050 [2024-10-01 13:52:42.400764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.400796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.400827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.400844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.400858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.400889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.050 [2024-10-01 13:52:42.406903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.407030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.407060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.050 [2024-10-01 13:52:42.407078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.407651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.407838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.407875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.407893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.408019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.050 [2024-10-01 13:52:42.411436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.411558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.411589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.050 [2024-10-01 13:52:42.411606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.411639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.411670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.411687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.411701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.411731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.050 [2024-10-01 13:52:42.418983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.419136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.419192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.050 [2024-10-01 13:52:42.419213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.419247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.419279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.419297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.419311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.419342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.050 [2024-10-01 13:52:42.421531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.422195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.422239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.050 [2024-10-01 13:52:42.422259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.422436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.422567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.422599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.422617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.422667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.050 [2024-10-01 13:52:42.429551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.429668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.429700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.050 [2024-10-01 13:52:42.429717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.429750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.429781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.429797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.429811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.429841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.050 [2024-10-01 13:52:42.433371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.433525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.433557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.050 [2024-10-01 13:52:42.433575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.433608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.433662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.433681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.050 [2024-10-01 13:52:42.433695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.050 [2024-10-01 13:52:42.433726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.050 [2024-10-01 13:52:42.440373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.050 [2024-10-01 13:52:42.440499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.050 [2024-10-01 13:52:42.440531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.050 [2024-10-01 13:52:42.440549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.050 [2024-10-01 13:52:42.440582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.050 [2024-10-01 13:52:42.440613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.050 [2024-10-01 13:52:42.440630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.440645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.440675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.051 [2024-10-01 13:52:42.444001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.444115] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.444146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.051 [2024-10-01 13:52:42.444164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.444197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.444228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.444245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.444259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.444290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.051 [2024-10-01 13:52:42.451155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.451349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.451380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.051 [2024-10-01 13:52:42.451399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.451439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.451474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.451491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.451505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.451536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.051 [2024-10-01 13:52:42.454755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.454877] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.454909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.051 [2024-10-01 13:52:42.454943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.454977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.455008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.455025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.455042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.455073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.051 [2024-10-01 13:52:42.462256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.462370] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.462401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.051 [2024-10-01 13:52:42.462419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.462451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.462481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.462498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.462513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.462555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.051 [2024-10-01 13:52:42.465588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.465707] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.465737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.051 [2024-10-01 13:52:42.465754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.465786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.465816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.465833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.465847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.465878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.051 [2024-10-01 13:52:42.472687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.472801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.472833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.051 [2024-10-01 13:52:42.472868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.472903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.472952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.472971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.472985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.473016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.051 [2024-10-01 13:52:42.476530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.476645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.476676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.051 [2024-10-01 13:52:42.476694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.476726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.476757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.476773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.476788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.476819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.051 [2024-10-01 13:52:42.483435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.483558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.483589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.051 [2024-10-01 13:52:42.483608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.483641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.483671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.483688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.483702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.483733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.051 [2024-10-01 13:52:42.487060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.487173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.487204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.051 [2024-10-01 13:52:42.487221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.487254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.487285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.487317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.487332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.487364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.051 [2024-10-01 13:52:42.494199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.494392] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.494424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.051 [2024-10-01 13:52:42.494442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.494482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.494516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.494533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.494564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.494595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.051 [2024-10-01 13:52:42.497851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.497988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.498019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.051 [2024-10-01 13:52:42.498037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.498069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.498100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.498117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.498132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.498162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.051 [2024-10-01 13:52:42.505330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.505445] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.505477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.051 [2024-10-01 13:52:42.505494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.505539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.505569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.505586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.505600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.505630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.051 [2024-10-01 13:52:42.508654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.508861] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.508894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.051 [2024-10-01 13:52:42.508925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.508970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.509004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.509022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.509035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.509066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.051 [2024-10-01 13:52:42.515872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.516001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.516033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.051 [2024-10-01 13:52:42.516051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.516084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.516115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.516132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.516146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.516176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.051 [2024-10-01 13:52:42.519685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.519836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.519868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.051 [2024-10-01 13:52:42.519886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.519934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.519970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.519986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.520001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.520032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.051 [2024-10-01 13:52:42.526719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.526840] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.526871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.051 [2024-10-01 13:52:42.526889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.526965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.526998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.527015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.527031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.527062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.051 [2024-10-01 13:52:42.530379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.530502] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.051 [2024-10-01 13:52:42.530533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.051 [2024-10-01 13:52:42.530568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.051 [2024-10-01 13:52:42.530601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.051 [2024-10-01 13:52:42.530632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.051 [2024-10-01 13:52:42.530648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.051 [2024-10-01 13:52:42.530662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.051 [2024-10-01 13:52:42.530692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.051 [2024-10-01 13:52:42.537552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.051 [2024-10-01 13:52:42.537675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.537707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.052 [2024-10-01 13:52:42.537724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.537756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.537787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.537805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.537819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.537848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.052 [2024-10-01 13:52:42.541083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.541204] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.541236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.052 [2024-10-01 13:52:42.541253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.541297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.541327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.541344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.541374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.541410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.052 [2024-10-01 13:52:42.548545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.548661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.548693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.052 [2024-10-01 13:52:42.548710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.548743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.548774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.548792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.548806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.548836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.052 [2024-10-01 13:52:42.551950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.552070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.552101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.052 [2024-10-01 13:52:42.552119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.552151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.552181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.552198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.552212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.552243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.052 [2024-10-01 13:52:42.559078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.559192] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.559223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.052 [2024-10-01 13:52:42.559241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.559273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.559307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.559325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.559339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.559368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.052 [2024-10-01 13:52:42.562876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.563002] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.563050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.052 [2024-10-01 13:52:42.563071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.563105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.563136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.563153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.563167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.563199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.052 [2024-10-01 13:52:42.569860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.569998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.570030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.052 [2024-10-01 13:52:42.570048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.570080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.570110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.570128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.570142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.570177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.052 [2024-10-01 13:52:42.573491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.573603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.573635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.052 [2024-10-01 13:52:42.573653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.573685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.573716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.573732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.573746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.573776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.052 [2024-10-01 13:52:42.580518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.580771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.580819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.052 [2024-10-01 13:52:42.580840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.580973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.581037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.581058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.581073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.581104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.052 [2024-10-01 13:52:42.584246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.584373] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.584405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.052 [2024-10-01 13:52:42.584422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.584455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.584485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.584502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.584516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.584547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.052 [2024-10-01 13:52:42.591744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.591921] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.591955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.052 [2024-10-01 13:52:42.591974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.592008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.592039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.592055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.592070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.592101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.052 [2024-10-01 13:52:42.594351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.595023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.595066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.052 [2024-10-01 13:52:42.595087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.595272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.595388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.595409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.595423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.595462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.052 [2024-10-01 13:52:42.602408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.602523] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.602568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.052 [2024-10-01 13:52:42.602588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.602621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.602652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.602669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.602683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.602713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.052 [2024-10-01 13:52:42.606187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.606338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.606370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.052 [2024-10-01 13:52:42.606388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.606421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.606452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.606469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.606483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.606514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.052 [2024-10-01 13:52:42.613242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.613365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.613396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.052 [2024-10-01 13:52:42.613415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.613448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.613479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.613496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.613526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.613557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.052 [2024-10-01 13:52:42.616896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.617033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.617065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.052 [2024-10-01 13:52:42.617106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.617141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.617172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.617190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.617204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.617235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.052 [2024-10-01 13:52:42.623340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.624018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.624064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.052 [2024-10-01 13:52:42.624086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.624254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.624369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.052 [2024-10-01 13:52:42.624391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.052 [2024-10-01 13:52:42.624405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.052 [2024-10-01 13:52:42.624445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.052 [2024-10-01 13:52:42.627649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.052 [2024-10-01 13:52:42.627771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.052 [2024-10-01 13:52:42.627802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.052 [2024-10-01 13:52:42.627820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.052 [2024-10-01 13:52:42.627853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.052 [2024-10-01 13:52:42.627884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.627901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.627930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.627964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.053 [2024-10-01 13:52:42.635110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.635264] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.635297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.053 [2024-10-01 13:52:42.635316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.635350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.635381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.635417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.635433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.635471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.053 [2024-10-01 13:52:42.638520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.638651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.638682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.053 [2024-10-01 13:52:42.638699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.638732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.638768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.638785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.638800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.638830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.053 [2024-10-01 13:52:42.645718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.645835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.645866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.053 [2024-10-01 13:52:42.645884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.645934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.645969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.645987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.646001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.646031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.053 [2024-10-01 13:52:42.649536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.649655] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.649687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.053 [2024-10-01 13:52:42.649704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.649737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.649769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.649786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.649800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.649830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.053 [2024-10-01 13:52:42.656508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.656649] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.656681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.053 [2024-10-01 13:52:42.656699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.656739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.656770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.656786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.656801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.656831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.053 [2024-10-01 13:52:42.660167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.660282] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.660314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.053 [2024-10-01 13:52:42.660332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.660364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.660395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.660412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.660426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.660457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.053 [2024-10-01 13:52:42.666620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.667299] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.667344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.053 [2024-10-01 13:52:42.667365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.667525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.667640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.667661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.667675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.667713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.053 [2024-10-01 13:52:42.671005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.671127] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.671159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.053 [2024-10-01 13:52:42.671176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.671229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.671261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.671279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.671292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.671323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.053 [2024-10-01 13:52:42.678438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.678609] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.678643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.053 [2024-10-01 13:52:42.678661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.678695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.678726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.678744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.678758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.678789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.053 [2024-10-01 13:52:42.681820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.681954] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.681987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.053 [2024-10-01 13:52:42.682005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.682038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.682069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.682086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.682105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.682136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.053 [2024-10-01 13:52:42.688944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.689058] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.689089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.053 [2024-10-01 13:52:42.689107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.689139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.689170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.689187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.689207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.689247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.053 [2024-10-01 13:52:42.692799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.692928] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.692960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.053 [2024-10-01 13:52:42.692978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.693011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.693042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.693059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.693073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.693104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.053 [2024-10-01 13:52:42.699659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.699793] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.699825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.053 [2024-10-01 13:52:42.699843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.699876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.699907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.699943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.699958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.699989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.053 [2024-10-01 13:52:42.703349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.703463] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.703494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.053 [2024-10-01 13:52:42.703511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.703544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.703574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.703591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.703605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.703635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.053 [2024-10-01 13:52:42.710490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.710620] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.710671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.053 [2024-10-01 13:52:42.710691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.710725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.710757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.710774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.710788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.710818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.053 [2024-10-01 13:52:42.714040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.714173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.714205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.053 [2024-10-01 13:52:42.714222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.053 [2024-10-01 13:52:42.714255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.053 [2024-10-01 13:52:42.714285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.053 [2024-10-01 13:52:42.714302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.053 [2024-10-01 13:52:42.714316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.053 [2024-10-01 13:52:42.714346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.053 [2024-10-01 13:52:42.721511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.053 [2024-10-01 13:52:42.721625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.053 [2024-10-01 13:52:42.721657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.053 [2024-10-01 13:52:42.721674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.721707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.721738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.721755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.721769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.721800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.054 [2024-10-01 13:52:42.724142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.724800] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.724843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.054 [2024-10-01 13:52:42.724863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.725052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.725193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.725222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.725236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.725275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.054 [2024-10-01 13:52:42.732190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.732314] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.732347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.054 [2024-10-01 13:52:42.732365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.732397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.732428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.732446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.732460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.732499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.054 [2024-10-01 13:52:42.736050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.736195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.736228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.054 [2024-10-01 13:52:42.736245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.736279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.736309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.736335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.736349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.736380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.054 [2024-10-01 13:52:42.743065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.743186] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.743218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.054 [2024-10-01 13:52:42.743236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.743268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.743308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.743328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.743342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.743391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.054 [2024-10-01 13:52:42.746811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.746949] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.746983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.054 [2024-10-01 13:52:42.747001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.747034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.747066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.747082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.747096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.747128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.054 [2024-10-01 13:52:42.753157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.753268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.753299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.054 [2024-10-01 13:52:42.753316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.753348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.753956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.753994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.754013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.754175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.054 [2024-10-01 13:52:42.757491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.757733] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.757776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.054 [2024-10-01 13:52:42.757797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.757906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.757968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.757988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.758003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.758045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.054 [2024-10-01 13:52:42.765209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.765369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.765411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.054 [2024-10-01 13:52:42.765450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.765486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.765518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.765535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.765550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.765581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.054 [2024-10-01 13:52:42.767581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.767691] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.767722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.054 [2024-10-01 13:52:42.767740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.767772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.767803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.767819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.767833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.767864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.054 [2024-10-01 13:52:42.775982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.776100] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.776132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.054 [2024-10-01 13:52:42.776151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.776184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.776216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.776234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.776248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.776278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.054 [2024-10-01 13:52:42.779629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.779990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.780034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.054 [2024-10-01 13:52:42.780055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.780125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.780164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.780207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.780223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.780256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.054 [2024-10-01 13:52:42.786157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.786851] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.786898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.054 [2024-10-01 13:52:42.786936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.787105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.787233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.787264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.787281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.787322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.054 [2024-10-01 13:52:42.790733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.790849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.790880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.054 [2024-10-01 13:52:42.790899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.790949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.790983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.791001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.791015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.791047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.054 [2024-10-01 13:52:42.796490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.796609] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.796641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.054 [2024-10-01 13:52:42.796659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.796691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.796722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.796739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.796753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.796784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.054 [2024-10-01 13:52:42.800962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.801689] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.801733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.054 [2024-10-01 13:52:42.801755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.801959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.802080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.802101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.802116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.802156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.054 [2024-10-01 13:52:42.806716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.806850] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.806882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.054 [2024-10-01 13:52:42.806900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.054 [2024-10-01 13:52:42.806951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.054 [2024-10-01 13:52:42.806985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.054 [2024-10-01 13:52:42.807003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.054 [2024-10-01 13:52:42.807018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.054 [2024-10-01 13:52:42.807049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.054 [2024-10-01 13:52:42.811459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.054 [2024-10-01 13:52:42.811587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.054 [2024-10-01 13:52:42.811625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.055 [2024-10-01 13:52:42.811643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.811678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.811710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.811727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.811742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.811773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.055 [2024-10-01 13:52:42.816820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.816955] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.816989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.055 [2024-10-01 13:52:42.817007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.817068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.817130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.817152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.817167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.817198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.055 [2024-10-01 13:52:42.821563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.821690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.821722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.055 [2024-10-01 13:52:42.821740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.821773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.821804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.821821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.821836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.821867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.055 [2024-10-01 13:52:42.826928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.827049] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.827080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.055 [2024-10-01 13:52:42.827098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.827684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.827867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.827903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.827936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.828046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.055 [2024-10-01 13:52:42.831655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.831769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.831801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.055 [2024-10-01 13:52:42.831820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.831853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.831885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.831902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.831958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.831994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.055 [2024-10-01 13:52:42.839161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.839321] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.839365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.055 [2024-10-01 13:52:42.839386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.839420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.839452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.839469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.839484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.839516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.055 [2024-10-01 13:52:42.841747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.841859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.841896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.055 [2024-10-01 13:52:42.841932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.842522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.842724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.842767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.842785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.842895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.055 [2024-10-01 13:52:42.850016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.850146] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.850178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.055 [2024-10-01 13:52:42.850197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.850230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.850261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.850279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.850293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.850323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.055 [2024-10-01 13:52:42.853962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.854113] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.854184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.055 [2024-10-01 13:52:42.854207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.854243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.854275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.854292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.854306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.854337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.055 [2024-10-01 13:52:42.860229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.860890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.860949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.055 [2024-10-01 13:52:42.860971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.861144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.861270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.861315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.861333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.861373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.055 [2024-10-01 13:52:42.864787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.864902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.864954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.055 [2024-10-01 13:52:42.864974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.865008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.865039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.865056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.865070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.865100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.055 8200.00 IOPS, 32.03 MiB/s [2024-10-01 13:52:42.871158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.871275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.871307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.055 [2024-10-01 13:52:42.871325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.871358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.871994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.872031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.872050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.872213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.055 [2024-10-01 13:52:42.874960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.875622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.875667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.055 [2024-10-01 13:52:42.875688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.875859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.875997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.876029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.876046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.876088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.055 [2024-10-01 13:52:42.883363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.883529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.883572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.055 [2024-10-01 13:52:42.883594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.883645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.883681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.883699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.883714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.883745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.055 [2024-10-01 13:52:42.885219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.885331] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.885361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.055 [2024-10-01 13:52:42.885379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.885411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.885443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.885470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.885484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.885543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.055 [2024-10-01 13:52:42.894167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.894307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.894340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.055 [2024-10-01 13:52:42.894359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.894393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.894424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.894442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.894457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.894487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.055 [2024-10-01 13:52:42.895304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.895414] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.895446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.055 [2024-10-01 13:52:42.895465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.895498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.895530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.895547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.895561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.895591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.055 [2024-10-01 13:52:42.904349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.904481] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.904514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.055 [2024-10-01 13:52:42.904532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.905130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.905319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.905356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.905375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.905489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.055 [2024-10-01 13:52:42.905570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.055 [2024-10-01 13:52:42.905678] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.055 [2024-10-01 13:52:42.905715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.055 [2024-10-01 13:52:42.905780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.055 [2024-10-01 13:52:42.905816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.055 [2024-10-01 13:52:42.907097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.055 [2024-10-01 13:52:42.907137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.055 [2024-10-01 13:52:42.907156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.055 [2024-10-01 13:52:42.907389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.056 [2024-10-01 13:52:42.914657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.914782] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.914825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.056 [2024-10-01 13:52:42.914844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.914878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.914924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.914946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.914961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.914993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.056 [2024-10-01 13:52:42.916332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.916462] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.916493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.056 [2024-10-01 13:52:42.916511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.916544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.916575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.916592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.916607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.916637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.056 [2024-10-01 13:52:42.924757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.924893] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.924940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.056 [2024-10-01 13:52:42.924960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.924999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.925031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.925072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.925088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.925120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.056 [2024-10-01 13:52:42.927552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.927735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.927778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.056 [2024-10-01 13:52:42.927798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.927833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.927865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.927882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.927896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.927943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.056 [2024-10-01 13:52:42.934861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.935013] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.935046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.056 [2024-10-01 13:52:42.935065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.935098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.935130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.935148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.935163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.936400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.056 [2024-10-01 13:52:42.938375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.938489] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.938527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.056 [2024-10-01 13:52:42.938558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.938593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.938624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.938642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.938656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.938687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.056 [2024-10-01 13:52:42.945676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.945809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.945843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.056 [2024-10-01 13:52:42.945862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.945896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.945948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.945969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.945984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.946014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.056 [2024-10-01 13:52:42.948465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.949149] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.949194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.056 [2024-10-01 13:52:42.949215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.949392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.949520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.949550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.949568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.949629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.056 [2024-10-01 13:52:42.956686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.957057] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.957102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.056 [2024-10-01 13:52:42.957124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.957196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.957235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.957254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.957269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.957301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.056 [2024-10-01 13:52:42.958816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.958944] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.958976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.056 [2024-10-01 13:52:42.958995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.959055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.959087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.959105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.959119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.959151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.056 [2024-10-01 13:52:42.967641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.967756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.967787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.056 [2024-10-01 13:52:42.967805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.967838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.967869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.967886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.967901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.967946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.056 [2024-10-01 13:52:42.968903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.969025] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.969062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.056 [2024-10-01 13:52:42.969081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.969114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.969145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.969162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.969176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.969206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.056 [2024-10-01 13:52:42.981374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.981501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.981802] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.981865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.056 [2024-10-01 13:52:42.981907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.982005] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.982036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.056 [2024-10-01 13:52:42.982092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.983765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.983822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.985690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.985742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.985767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:35.056 [2024-10-01 13:52:42.985796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.985815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.985833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.986883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.056 [2024-10-01 13:52:42.986955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.056 [2024-10-01 13:52:42.993885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.994032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:42.995494] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.995570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.056 [2024-10-01 13:52:42.995601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.995677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:42.995708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.056 [2024-10-01 13:52:42.995728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:42.996681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.996743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:42.998693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.998745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.998770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.998802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:42.998822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:42.998839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:42.999199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.056 [2024-10-01 13:52:42.999242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.056 [2024-10-01 13:52:43.007017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:43.007105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.056 [2024-10-01 13:52:43.007500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:43.007559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.056 [2024-10-01 13:52:43.007587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:43.007656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.056 [2024-10-01 13:52:43.007685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.056 [2024-10-01 13:52:43.007708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.056 [2024-10-01 13:52:43.008066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:43.008117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.056 [2024-10-01 13:52:43.010128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:43.010177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:43.010202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.056 [2024-10-01 13:52:43.010226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.056 [2024-10-01 13:52:43.010246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.056 [2024-10-01 13:52:43.010263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.011268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.057 [2024-10-01 13:52:43.011316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.057 [2024-10-01 13:52:43.020773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.020865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.022061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.022122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.057 [2024-10-01 13:52:43.022156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.022232] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.022263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.057 [2024-10-01 13:52:43.022284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.024434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.024495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.025702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.025752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.025778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.025808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.025883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.025904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.027907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.057 [2024-10-01 13:52:43.027983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.057 [2024-10-01 13:52:43.034055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.034138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.035449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.035511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.057 [2024-10-01 13:52:43.035539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.035607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.035637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.057 [2024-10-01 13:52:43.035662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.036588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.036643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.036865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.036942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.036969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.036995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.037015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.037034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.037182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.057 [2024-10-01 13:52:43.037225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.057 [2024-10-01 13:52:43.045163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.045236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.045393] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.045434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.057 [2024-10-01 13:52:43.045458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.045523] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.045552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.057 [2024-10-01 13:52:43.045581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.046846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.046901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.047219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.047266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.047290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.047315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.047335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.047352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.048933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.057 [2024-10-01 13:52:43.048975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.057 [2024-10-01 13:52:43.055341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.055444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.055590] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.055633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.057 [2024-10-01 13:52:43.055669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.057378] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.057431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.057 [2024-10-01 13:52:43.057457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.057484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.057814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.057862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.057885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.057905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.059184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.057 [2024-10-01 13:52:43.059237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.059262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.059281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.060557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.057 [2024-10-01 13:52:43.066383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.066458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.066626] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.066722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.057 [2024-10-01 13:52:43.066750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.066818] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.066854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.057 [2024-10-01 13:52:43.066875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.068695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.068749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.069894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.069969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.069997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.070021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.070040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.070058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.072027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.057 [2024-10-01 13:52:43.072073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.057 [2024-10-01 13:52:43.076574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.076669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.076797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.076837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.057 [2024-10-01 13:52:43.076862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.078488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.078553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.057 [2024-10-01 13:52:43.078583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.078611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.079714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.079765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.079788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.079808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.079983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.057 [2024-10-01 13:52:43.080017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.080085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.080120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.080875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.057 [2024-10-01 13:52:43.088246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.088330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.089675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.089731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.057 [2024-10-01 13:52:43.089758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.089843] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.089875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.057 [2024-10-01 13:52:43.089897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.090202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.090244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.091466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.091515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.091540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.091564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.091583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.091601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.091884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.057 [2024-10-01 13:52:43.091944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.057 [2024-10-01 13:52:43.099625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.099688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.099832] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.099872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.057 [2024-10-01 13:52:43.099895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.099990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.100036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.057 [2024-10-01 13:52:43.100058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.101667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.101767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.102147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.102194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.102217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.102241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.057 [2024-10-01 13:52:43.102272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.057 [2024-10-01 13:52:43.102293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.057 [2024-10-01 13:52:43.103544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.057 [2024-10-01 13:52:43.103591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.057 [2024-10-01 13:52:43.110858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.110953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.057 [2024-10-01 13:52:43.111134] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.111185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.057 [2024-10-01 13:52:43.111210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.111275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.057 [2024-10-01 13:52:43.111307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.057 [2024-10-01 13:52:43.111328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.057 [2024-10-01 13:52:43.113154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.057 [2024-10-01 13:52:43.113208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.058 [2024-10-01 13:52:43.114406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.058 [2024-10-01 13:52:43.114454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.058 [2024-10-01 13:52:43.114479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.058 [2024-10-01 13:52:43.114506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.058 [2024-10-01 13:52:43.114525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.058 [2024-10-01 13:52:43.114565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.058 [2024-10-01 13:52:43.116502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.058 [2024-10-01 13:52:43.116550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.058 [2024-10-01 13:52:43.121118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.058 [2024-10-01 13:52:43.121192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.058 [2024-10-01 13:52:43.121362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.058 [2024-10-01 13:52:43.121405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.058 [2024-10-01 13:52:43.121485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.058 [2024-10-01 13:52:43.121560] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.058 [2024-10-01 13:52:43.121591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.058 [2024-10-01 13:52:43.121612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.058 [2024-10-01 13:52:43.123219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.058 [2024-10-01 13:52:43.123274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.058 [2024-10-01 13:52:43.124423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.058 [2024-10-01 13:52:43.124472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.058 [2024-10-01 13:52:43.124497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.058 [2024-10-01 13:52:43.124522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.058 [2024-10-01 13:52:43.124541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.058 [2024-10-01 13:52:43.124558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.058 [2024-10-01 13:52:43.125432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.058 [2024-10-01 13:52:43.125477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.058 [2024-10-01 13:52:43.133885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.058 [2024-10-01 13:52:43.133972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.058 [2024-10-01 13:52:43.134360] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.058 [2024-10-01 13:52:43.134417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.058 [2024-10-01 13:52:43.134444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.058 [2024-10-01 13:52:43.134519] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.058 [2024-10-01 13:52:43.134570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.058 [2024-10-01 13:52:43.134593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.058 [2024-10-01 13:52:43.135772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.058 [2024-10-01 13:52:43.135835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.058 [2024-10-01 13:52:43.136153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.058 [2024-10-01 13:52:43.136198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.058 [2024-10-01 13:52:43.136232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.058 [2024-10-01 13:52:43.136257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.058 [2024-10-01 13:52:43.136277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.058 [2024-10-01 13:52:43.136294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.058 [2024-10-01 13:52:43.137869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.058 [2024-10-01 13:52:43.137929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.058 [2024-10-01 13:52:43.144091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.058 [2024-10-01 13:52:43.144156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.058 [2024-10-01 13:52:43.144309] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.058 [2024-10-01 13:52:43.144349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.058 [2024-10-01 13:52:43.144372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.058 [2024-10-01 13:52:43.144433] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.058 [2024-10-01 13:52:43.144463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.058 [2024-10-01 13:52:43.144483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.058 [2024-10-01 13:52:43.146099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.058 [2024-10-01 13:52:43.146154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.058 [2024-10-01 13:52:43.146457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.058 [2024-10-01 13:52:43.146506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.058 [2024-10-01 13:52:43.146530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.058 [2024-10-01 13:52:43.146578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.058 [2024-10-01 13:52:43.146601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.058 [2024-10-01 13:52:43.146618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.058 [2024-10-01 13:52:43.147818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.058 [2024-10-01 13:52:43.147864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.058 [2024-10-01 13:52:43.154274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.058 [2024-10-01 13:52:43.155124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.058 [2024-10-01 13:52:43.155259] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.058 [2024-10-01 13:52:43.155305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.058 [2024-10-01 13:52:43.155333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.058 [2024-10-01 13:52:43.155596] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.058 [2024-10-01 13:52:43.155646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.058 [2024-10-01 13:52:43.155670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.058 [2024-10-01 13:52:43.155695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.058 [2024-10-01 13:52:43.155869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.058 [2024-10-01 13:52:43.155977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.058 [2024-10-01 13:52:43.156003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.058 [2024-10-01 13:52:43.156021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.058 [2024-10-01 13:52:43.157798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.058 [2024-10-01 13:52:43.157846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.058 [2024-10-01 13:52:43.157869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.058 [2024-10-01 13:52:43.157888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.058 [2024-10-01 13:52:43.159106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.058 [2024-10-01 13:52:43.165692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.058 [2024-10-01 13:52:43.165757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.058 [2024-10-01 13:52:43.165889] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.058 [2024-10-01 13:52:43.165947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.058 [2024-10-01 13:52:43.165972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.058 [2024-10-01 13:52:43.166039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.058 [2024-10-01 13:52:43.166069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.058 [2024-10-01 13:52:43.166094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.058 [2024-10-01 13:52:43.166137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.058 [2024-10-01 13:52:43.166166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.058 [2024-10-01 13:52:43.167722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.058 [2024-10-01 13:52:43.167773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.058 [2024-10-01 13:52:43.167798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.058 [2024-10-01 13:52:43.167821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.058 [2024-10-01 13:52:43.167840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.058 [2024-10-01 13:52:43.167857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.058 [2024-10-01 13:52:43.169019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.058 [2024-10-01 13:52:43.169069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.058 [2024-10-01 13:52:43.177657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.058 [2024-10-01 13:52:43.177728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.058 [2024-10-01 13:52:43.179173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.058 [2024-10-01 13:52:43.179233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.058 [2024-10-01 13:52:43.179260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.058 [2024-10-01 13:52:43.179376] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.058 [2024-10-01 13:52:43.179411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.058 [2024-10-01 13:52:43.179438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.058 [2024-10-01 13:52:43.179689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.058 [2024-10-01 13:52:43.179730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.058 [2024-10-01 13:52:43.180950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.058 [2024-10-01 13:52:43.180999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.058 [2024-10-01 13:52:43.181025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.058 [2024-10-01 13:52:43.181048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.058 [2024-10-01 13:52:43.181067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.058 [2024-10-01 13:52:43.181084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.058 [2024-10-01 13:52:43.181359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.058 [2024-10-01 13:52:43.181392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.058 [2024-10-01 13:52:43.189259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.058 [2024-10-01 13:52:43.189330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.058 [2024-10-01 13:52:43.189477] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.058 [2024-10-01 13:52:43.189516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.058 [2024-10-01 13:52:43.189539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.058 [2024-10-01 13:52:43.189600] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.058 [2024-10-01 13:52:43.189629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.058 [2024-10-01 13:52:43.189658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.059 [2024-10-01 13:52:43.191282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.191343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.191646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.059 [2024-10-01 13:52:43.191696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.059 [2024-10-01 13:52:43.191720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.059 [2024-10-01 13:52:43.191744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.059 [2024-10-01 13:52:43.191774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.059 [2024-10-01 13:52:43.191792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.059 [2024-10-01 13:52:43.193024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.059 [2024-10-01 13:52:43.193115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.059 [2024-10-01 13:52:43.200349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.059 [2024-10-01 13:52:43.200416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.059 [2024-10-01 13:52:43.200663] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.059 [2024-10-01 13:52:43.200723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.059 [2024-10-01 13:52:43.200749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.059 [2024-10-01 13:52:43.200814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.059 [2024-10-01 13:52:43.200844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.059 [2024-10-01 13:52:43.200864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.059 [2024-10-01 13:52:43.202689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.202751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.203977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.059 [2024-10-01 13:52:43.204027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.059 [2024-10-01 13:52:43.204060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.059 [2024-10-01 13:52:43.204085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.059 [2024-10-01 13:52:43.204105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.059 [2024-10-01 13:52:43.204122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.059 [2024-10-01 13:52:43.204360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.059 [2024-10-01 13:52:43.204396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.059 [2024-10-01 13:52:43.210635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.059 [2024-10-01 13:52:43.210734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.059 [2024-10-01 13:52:43.210879] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.059 [2024-10-01 13:52:43.210936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.059 [2024-10-01 13:52:43.210963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.059 [2024-10-01 13:52:43.211037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.059 [2024-10-01 13:52:43.211070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.059 [2024-10-01 13:52:43.211095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.059 [2024-10-01 13:52:43.212640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.212709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.213857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.059 [2024-10-01 13:52:43.213971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.059 [2024-10-01 13:52:43.213998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.059 [2024-10-01 13:52:43.214023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.059 [2024-10-01 13:52:43.214042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.059 [2024-10-01 13:52:43.214059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.059 [2024-10-01 13:52:43.214946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.059 [2024-10-01 13:52:43.214993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.059 [2024-10-01 13:52:43.222354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.059 [2024-10-01 13:52:43.222445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.059 [2024-10-01 13:52:43.223900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.059 [2024-10-01 13:52:43.223977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.059 [2024-10-01 13:52:43.224006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.059 [2024-10-01 13:52:43.224084] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.059 [2024-10-01 13:52:43.224117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.059 [2024-10-01 13:52:43.224138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.059 [2024-10-01 13:52:43.224395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.224449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.225646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.059 [2024-10-01 13:52:43.225695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.059 [2024-10-01 13:52:43.225720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.059 [2024-10-01 13:52:43.225744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.059 [2024-10-01 13:52:43.225763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.059 [2024-10-01 13:52:43.225780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.059 [2024-10-01 13:52:43.227618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.059 [2024-10-01 13:52:43.227668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.059 [2024-10-01 13:52:43.233688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.059 [2024-10-01 13:52:43.233766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.059 [2024-10-01 13:52:43.233930] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.059 [2024-10-01 13:52:43.233971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.059 [2024-10-01 13:52:43.233994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.059 [2024-10-01 13:52:43.234067] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.059 [2024-10-01 13:52:43.234130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.059 [2024-10-01 13:52:43.234153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.059 [2024-10-01 13:52:43.235778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.235846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.236194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.059 [2024-10-01 13:52:43.236238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.059 [2024-10-01 13:52:43.236261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.059 [2024-10-01 13:52:43.236284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.059 [2024-10-01 13:52:43.236303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.059 [2024-10-01 13:52:43.236320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.059 [2024-10-01 13:52:43.237537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.059 [2024-10-01 13:52:43.237586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.059 [2024-10-01 13:52:43.244849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.059 [2024-10-01 13:52:43.245109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.059 [2024-10-01 13:52:43.245260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.059 [2024-10-01 13:52:43.245320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.059 [2024-10-01 13:52:43.245347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.059 [2024-10-01 13:52:43.247313] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.059 [2024-10-01 13:52:43.247380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.059 [2024-10-01 13:52:43.247408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.059 [2024-10-01 13:52:43.247442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.248620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.248673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.059 [2024-10-01 13:52:43.248697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.059 [2024-10-01 13:52:43.248718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.059 [2024-10-01 13:52:43.249022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.059 [2024-10-01 13:52:43.249070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.059 [2024-10-01 13:52:43.249092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.059 [2024-10-01 13:52:43.249111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.059 [2024-10-01 13:52:43.249273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.059 [2024-10-01 13:52:43.257005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.059 [2024-10-01 13:52:43.258307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.059 [2024-10-01 13:52:43.258513] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.059 [2024-10-01 13:52:43.258588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.059 [2024-10-01 13:52:43.258617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.059 [2024-10-01 13:52:43.259652] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.059 [2024-10-01 13:52:43.259709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.059 [2024-10-01 13:52:43.259735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.059 [2024-10-01 13:52:43.259763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.260032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.260070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.059 [2024-10-01 13:52:43.260090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.059 [2024-10-01 13:52:43.260112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.059 [2024-10-01 13:52:43.260267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.059 [2024-10-01 13:52:43.260297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.059 [2024-10-01 13:52:43.260319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.059 [2024-10-01 13:52:43.260336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.059 [2024-10-01 13:52:43.262167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.059 [2024-10-01 13:52:43.269538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.059 [2024-10-01 13:52:43.270024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.059 [2024-10-01 13:52:43.270085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.059 [2024-10-01 13:52:43.270114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.059 [2024-10-01 13:52:43.270203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.059 [2024-10-01 13:52:43.270257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.059 [2024-10-01 13:52:43.271900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.060 [2024-10-01 13:52:43.271983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.060 [2024-10-01 13:52:43.272009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.060 [2024-10-01 13:52:43.272031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.060 [2024-10-01 13:52:43.272049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.060 [2024-10-01 13:52:43.272070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.060 [2024-10-01 13:52:43.273218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.060 [2024-10-01 13:52:43.273283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.060 [2024-10-01 13:52:43.274216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.060 [2024-10-01 13:52:43.274267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.060 [2024-10-01 13:52:43.274291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.060 [2024-10-01 13:52:43.274554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.060 [2024-10-01 13:52:43.281464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.060 [2024-10-01 13:52:43.282798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.060 [2024-10-01 13:52:43.282975] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.060 [2024-10-01 13:52:43.283025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.060 [2024-10-01 13:52:43.283049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.060 [2024-10-01 13:52:43.283394] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.060 [2024-10-01 13:52:43.283459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.060 [2024-10-01 13:52:43.283487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.060 [2024-10-01 13:52:43.283513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.060 [2024-10-01 13:52:43.284706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.060 [2024-10-01 13:52:43.284764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.060 [2024-10-01 13:52:43.284788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.060 [2024-10-01 13:52:43.284808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.060 [2024-10-01 13:52:43.285085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.060 [2024-10-01 13:52:43.285129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.060 [2024-10-01 13:52:43.285150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.060 [2024-10-01 13:52:43.285169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.060 [2024-10-01 13:52:43.286724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.060 [2024-10-01 13:52:43.292810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.060 [2024-10-01 13:52:43.293018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.060 [2024-10-01 13:52:43.293069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.060 [2024-10-01 13:52:43.293093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.060 [2024-10-01 13:52:43.293155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.060 [2024-10-01 13:52:43.293219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.060 [2024-10-01 13:52:43.293261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.060 [2024-10-01 13:52:43.293327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.060 [2024-10-01 13:52:43.293348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.060 [2024-10-01 13:52:43.295029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.060 [2024-10-01 13:52:43.295142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.060 [2024-10-01 13:52:43.295179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.060 [2024-10-01 13:52:43.295202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.060 [2024-10-01 13:52:43.295506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.060 [2024-10-01 13:52:43.296698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.060 [2024-10-01 13:52:43.296748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.060 [2024-10-01 13:52:43.296775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.060 [2024-10-01 13:52:43.298080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.060 [2024-10-01 13:52:43.302954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.060 [2024-10-01 13:52:43.303108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.060 [2024-10-01 13:52:43.303149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.060 [2024-10-01 13:52:43.303171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.060 [2024-10-01 13:52:43.303941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.060 [2024-10-01 13:52:43.304209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.060 [2024-10-01 13:52:43.304254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.060 [2024-10-01 13:52:43.304277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.060 [2024-10-01 13:52:43.304436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.060 [2024-10-01 13:52:43.304470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.060 [2024-10-01 13:52:43.304579] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.060 [2024-10-01 13:52:43.304616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.060 [2024-10-01 13:52:43.304637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.060 [2024-10-01 13:52:43.306419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.060 [2024-10-01 13:52:43.307600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.060 [2024-10-01 13:52:43.307651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.060 [2024-10-01 13:52:43.307680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.060 [2024-10-01 13:52:43.307932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.060 [2024-10-01 13:52:43.314275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.060 [2024-10-01 13:52:43.314459] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.060 [2024-10-01 13:52:43.314500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.060 [2024-10-01 13:52:43.314522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.060 [2024-10-01 13:52:43.314593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.060 [2024-10-01 13:52:43.316159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.060 [2024-10-01 13:52:43.316212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.060 [2024-10-01 13:52:43.316238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.060 [2024-10-01 13:52:43.317395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.060 [2024-10-01 13:52:43.317451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.060 [2024-10-01 13:52:43.317668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.060 [2024-10-01 13:52:43.317708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.060 [2024-10-01 13:52:43.317731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.060 [2024-10-01 13:52:43.318505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.060 [2024-10-01 13:52:43.318773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.060 [2024-10-01 13:52:43.318819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.060 [2024-10-01 13:52:43.318841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.060 [2024-10-01 13:52:43.319005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.060 [2024-10-01 13:52:43.326923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.060 [2024-10-01 13:52:43.327380] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.060 [2024-10-01 13:52:43.327441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.060 [2024-10-01 13:52:43.327469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.060 [2024-10-01 13:52:43.328700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.060 [2024-10-01 13:52:43.330585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.060 [2024-10-01 13:52:43.330637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.060 [2024-10-01 13:52:43.330663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.060 [2024-10-01 13:52:43.331812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.060 [2024-10-01 13:52:43.331877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.060 [2024-10-01 13:52:43.332902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.060 [2024-10-01 13:52:43.332977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.060 [2024-10-01 13:52:43.333004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.060 [2024-10-01 13:52:43.333246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.060 [2024-10-01 13:52:43.333466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.060 [2024-10-01 13:52:43.333509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.060 [2024-10-01 13:52:43.333531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.060 [2024-10-01 13:52:43.335357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.060 [2024-10-01 13:52:43.341149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.060 [2024-10-01 13:52:43.342738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.060 [2024-10-01 13:52:43.342799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.060 [2024-10-01 13:52:43.342827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.060 [2024-10-01 13:52:43.343160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.060 [2024-10-01 13:52:43.344738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.060 [2024-10-01 13:52:43.344809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.060 [2024-10-01 13:52:43.344837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.060 [2024-10-01 13:52:43.344858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.060 [2024-10-01 13:52:43.346011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.060 [2024-10-01 13:52:43.346125] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.060 [2024-10-01 13:52:43.346163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.060 [2024-10-01 13:52:43.346185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.060 [2024-10-01 13:52:43.347089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.060 [2024-10-01 13:52:43.347334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.060 [2024-10-01 13:52:43.347370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.060 [2024-10-01 13:52:43.347391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.060 [2024-10-01 13:52:43.347532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.060 [2024-10-01 13:52:43.353182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.060 [2024-10-01 13:52:43.354092] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.060 [2024-10-01 13:52:43.354143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.061 [2024-10-01 13:52:43.354166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.354551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.061 [2024-10-01 13:52:43.354722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.061 [2024-10-01 13:52:43.354755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.061 [2024-10-01 13:52:43.354774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.061 [2024-10-01 13:52:43.354858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.061 [2024-10-01 13:52:43.354945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.061 [2024-10-01 13:52:43.355043] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.061 [2024-10-01 13:52:43.355078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.061 [2024-10-01 13:52:43.355106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.355141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.061 [2024-10-01 13:52:43.355172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.061 [2024-10-01 13:52:43.355190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.061 [2024-10-01 13:52:43.355204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.061 [2024-10-01 13:52:43.355235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.061 [2024-10-01 13:52:43.364098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.061 [2024-10-01 13:52:43.364368] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.061 [2024-10-01 13:52:43.364416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.061 [2024-10-01 13:52:43.364439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.364489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.061 [2024-10-01 13:52:43.364531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.061 [2024-10-01 13:52:43.364550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.061 [2024-10-01 13:52:43.364567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.061 [2024-10-01 13:52:43.364600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.061 [2024-10-01 13:52:43.367386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.061 [2024-10-01 13:52:43.368290] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.061 [2024-10-01 13:52:43.368339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.061 [2024-10-01 13:52:43.368361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.368725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.061 [2024-10-01 13:52:43.368907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.061 [2024-10-01 13:52:43.368957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.061 [2024-10-01 13:52:43.368976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.061 [2024-10-01 13:52:43.369021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.061 [2024-10-01 13:52:43.374227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.061 [2024-10-01 13:52:43.374362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.061 [2024-10-01 13:52:43.374398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.061 [2024-10-01 13:52:43.374459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.375094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.061 [2024-10-01 13:52:43.375304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.061 [2024-10-01 13:52:43.375332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.061 [2024-10-01 13:52:43.375359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.061 [2024-10-01 13:52:43.375476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.061 [2024-10-01 13:52:43.378159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.061 [2024-10-01 13:52:43.378416] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.061 [2024-10-01 13:52:43.378467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.061 [2024-10-01 13:52:43.378490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.378627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.061 [2024-10-01 13:52:43.378677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.061 [2024-10-01 13:52:43.378696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.061 [2024-10-01 13:52:43.378713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.061 [2024-10-01 13:52:43.378752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.061 [2024-10-01 13:52:43.384895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.061 [2024-10-01 13:52:43.385062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.061 [2024-10-01 13:52:43.385097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.061 [2024-10-01 13:52:43.385116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.385152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.061 [2024-10-01 13:52:43.385184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.061 [2024-10-01 13:52:43.385208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.061 [2024-10-01 13:52:43.385229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.061 [2024-10-01 13:52:43.385271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.061 [2024-10-01 13:52:43.388276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.061 [2024-10-01 13:52:43.388407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.061 [2024-10-01 13:52:43.388441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.061 [2024-10-01 13:52:43.388460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.388495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.061 [2024-10-01 13:52:43.389118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.061 [2024-10-01 13:52:43.389189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.061 [2024-10-01 13:52:43.389210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.061 [2024-10-01 13:52:43.389413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.061 [2024-10-01 13:52:43.397021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.061 [2024-10-01 13:52:43.397298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.061 [2024-10-01 13:52:43.397359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.061 [2024-10-01 13:52:43.397382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.397429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.061 [2024-10-01 13:52:43.397465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.061 [2024-10-01 13:52:43.397483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.061 [2024-10-01 13:52:43.397499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.061 [2024-10-01 13:52:43.397533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.061 [2024-10-01 13:52:43.399209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.061 [2024-10-01 13:52:43.399335] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.061 [2024-10-01 13:52:43.399371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.061 [2024-10-01 13:52:43.399398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.399434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.061 [2024-10-01 13:52:43.399466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.061 [2024-10-01 13:52:43.399483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.061 [2024-10-01 13:52:43.399499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.061 [2024-10-01 13:52:43.399530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.061 [2024-10-01 13:52:43.407145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.061 [2024-10-01 13:52:43.407293] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.061 [2024-10-01 13:52:43.407328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.061 [2024-10-01 13:52:43.407347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.407381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.061 [2024-10-01 13:52:43.407414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.061 [2024-10-01 13:52:43.407431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.061 [2024-10-01 13:52:43.407446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.061 [2024-10-01 13:52:43.407485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.061 [2024-10-01 13:52:43.411075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.061 [2024-10-01 13:52:43.411511] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.061 [2024-10-01 13:52:43.411558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.061 [2024-10-01 13:52:43.411581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.411734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.061 [2024-10-01 13:52:43.411783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.061 [2024-10-01 13:52:43.411804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.061 [2024-10-01 13:52:43.411819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.061 [2024-10-01 13:52:43.411852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.061 [2024-10-01 13:52:43.418013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.061 [2024-10-01 13:52:43.418159] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.061 [2024-10-01 13:52:43.418194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.061 [2024-10-01 13:52:43.418213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.418247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.061 [2024-10-01 13:52:43.418279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.061 [2024-10-01 13:52:43.418297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.061 [2024-10-01 13:52:43.418312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.061 [2024-10-01 13:52:43.418344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.061 [2024-10-01 13:52:43.421230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.061 [2024-10-01 13:52:43.421349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.061 [2024-10-01 13:52:43.421390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.061 [2024-10-01 13:52:43.421413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.421448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.061 [2024-10-01 13:52:43.421479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.061 [2024-10-01 13:52:43.421497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.061 [2024-10-01 13:52:43.421512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.061 [2024-10-01 13:52:43.421543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.061 [2024-10-01 13:52:43.428121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.061 [2024-10-01 13:52:43.428254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.061 [2024-10-01 13:52:43.428287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.061 [2024-10-01 13:52:43.428306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.061 [2024-10-01 13:52:43.429555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.430488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.430552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.430578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.430696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.062 [2024-10-01 13:52:43.431325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.432033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.432080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.062 [2024-10-01 13:52:43.432102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.432281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.432403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.432426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.432441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.432482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.062 [2024-10-01 13:52:43.439485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.439856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.439903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.062 [2024-10-01 13:52:43.439941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.440109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.440159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.440179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.440194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.440226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.062 [2024-10-01 13:52:43.441950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.442075] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.442109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.062 [2024-10-01 13:52:43.442128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.442162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.442194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.442212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.442265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.442300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.062 [2024-10-01 13:52:43.449681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.449869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.449905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.062 [2024-10-01 13:52:43.449943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.449982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.450015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.450033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.450049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.450081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.062 [2024-10-01 13:52:43.454048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.454348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.454398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.062 [2024-10-01 13:52:43.454420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.454466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.454517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.454552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.454573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.454608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.062 [2024-10-01 13:52:43.460568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.460825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.460862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.062 [2024-10-01 13:52:43.460891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.460957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.460995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.461013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.461029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.461063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.062 [2024-10-01 13:52:43.464186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.464366] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.464402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.062 [2024-10-01 13:52:43.464422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.464458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.464491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.464511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.464536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.464575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.062 [2024-10-01 13:52:43.470699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.470880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.470935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.062 [2024-10-01 13:52:43.470959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.472244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.473206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.473262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.473283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.473404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.062 [2024-10-01 13:52:43.474882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.475050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.475091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.062 [2024-10-01 13:52:43.475112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.475147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.475179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.475205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.475226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.475260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.062 [2024-10-01 13:52:43.482430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.482743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.482785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.062 [2024-10-01 13:52:43.482817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.482871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.482961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.482983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.482999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.483033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.062 [2024-10-01 13:52:43.485011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.485145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.485179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.062 [2024-10-01 13:52:43.485198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.486456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.487428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.487477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.487499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.487621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.062 [2024-10-01 13:52:43.492564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.492720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.492757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.062 [2024-10-01 13:52:43.492776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.492811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.492843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.492860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.492876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.492908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.062 [2024-10-01 13:52:43.496617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.496863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.496938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.062 [2024-10-01 13:52:43.496964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.497009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.497045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.497063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.497079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.497147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.062 [2024-10-01 13:52:43.503385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.503543] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.503579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.062 [2024-10-01 13:52:43.503598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.503643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.503676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.503696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.503723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.503767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.062 [2024-10-01 13:52:43.506723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.506853] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.506887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.062 [2024-10-01 13:52:43.506906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.506979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.507016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.507034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.507051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.507093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.062 [2024-10-01 13:52:43.513499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.513668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.513705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.062 [2024-10-01 13:52:43.513724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.513760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.513793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.513810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.062 [2024-10-01 13:52:43.513826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.062 [2024-10-01 13:52:43.515096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.062 [2024-10-01 13:52:43.516841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.062 [2024-10-01 13:52:43.517561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.062 [2024-10-01 13:52:43.517610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.062 [2024-10-01 13:52:43.517662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.062 [2024-10-01 13:52:43.517854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.062 [2024-10-01 13:52:43.518005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.062 [2024-10-01 13:52:43.518030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.518060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.518111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.063 [2024-10-01 13:52:43.525304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.525580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.525628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.063 [2024-10-01 13:52:43.525650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.525696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.525731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.525750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.525767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.525799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.063 [2024-10-01 13:52:43.527524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.527651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.527684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.063 [2024-10-01 13:52:43.527702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.527741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.527779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.527797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.527812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.527844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.063 [2024-10-01 13:52:43.535426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.535561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.535595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.063 [2024-10-01 13:52:43.535621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.535671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.535704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.535761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.535779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.535811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.063 [2024-10-01 13:52:43.539279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.539656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.539703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.063 [2024-10-01 13:52:43.539725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.539875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.539944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.539966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.539983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.540027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.063 [2024-10-01 13:52:43.546195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.546361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.546396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.063 [2024-10-01 13:52:43.546415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.546456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.546492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.546510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.546526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.546574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.063 [2024-10-01 13:52:43.549411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.549550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.549585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.063 [2024-10-01 13:52:43.549604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.549640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.549673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.549691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.549707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.549740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.063 [2024-10-01 13:52:43.556321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.556484] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.556525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.063 [2024-10-01 13:52:43.556546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.557802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.558818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.558865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.558888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.559036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.063 [2024-10-01 13:52:43.560345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.560490] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.560525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.063 [2024-10-01 13:52:43.560543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.560589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.560638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.560655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.560671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.560702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.063 [2024-10-01 13:52:43.567862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.568181] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.568231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.063 [2024-10-01 13:52:43.568267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.568316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.568362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.568380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.568397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.568430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.063 [2024-10-01 13:52:43.570447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.570601] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.570637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.063 [2024-10-01 13:52:43.570656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.571951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.572889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.572950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.572980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.573098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.063 [2024-10-01 13:52:43.578012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.578167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.578203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.063 [2024-10-01 13:52:43.578223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.578259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.578291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.578309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.578325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.578355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.063 [2024-10-01 13:52:43.582079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.582322] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.582361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.063 [2024-10-01 13:52:43.582381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.582425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.582465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.582498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.582516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.582566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.063 [2024-10-01 13:52:43.588719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.588891] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.588949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.063 [2024-10-01 13:52:43.588977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.589016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.589052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.589069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.589116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.589150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.063 [2024-10-01 13:52:43.592184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.592317] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.592356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.063 [2024-10-01 13:52:43.592377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.592412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.592444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.592462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.592478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.592509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.063 [2024-10-01 13:52:43.598842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.598999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.599034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.063 [2024-10-01 13:52:43.599054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.599089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.599121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.599138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.599165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.600393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.063 [2024-10-01 13:52:43.603055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.603203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.603238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.063 [2024-10-01 13:52:43.603257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.603309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.603343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.603361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.603376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.603408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.063 [2024-10-01 13:52:43.610388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.610796] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.610850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.063 [2024-10-01 13:52:43.610874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.611029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.611078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.611108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.611136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.611176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.063 [2024-10-01 13:52:43.613160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.613280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.613313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.063 [2024-10-01 13:52:43.613331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.613365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.613402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.613420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.613434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.613465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.063 [2024-10-01 13:52:43.620616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.620782] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.620817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.063 [2024-10-01 13:52:43.620836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.620872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.063 [2024-10-01 13:52:43.620935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.063 [2024-10-01 13:52:43.620959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.063 [2024-10-01 13:52:43.620976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.063 [2024-10-01 13:52:43.621010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.063 [2024-10-01 13:52:43.625025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.063 [2024-10-01 13:52:43.625298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.063 [2024-10-01 13:52:43.625346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.063 [2024-10-01 13:52:43.625368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.063 [2024-10-01 13:52:43.625415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.064 [2024-10-01 13:52:43.625491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.064 [2024-10-01 13:52:43.625511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.064 [2024-10-01 13:52:43.625527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.064 [2024-10-01 13:52:43.625560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.064 [2024-10-01 13:52:43.631340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.064 [2024-10-01 13:52:43.631653] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.064 [2024-10-01 13:52:43.631702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.064 [2024-10-01 13:52:43.631724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.064 [2024-10-01 13:52:43.631845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.064 [2024-10-01 13:52:43.631889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.064 [2024-10-01 13:52:43.631908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.064 [2024-10-01 13:52:43.631958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.064 [2024-10-01 13:52:43.631997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.064 [2024-10-01 13:52:43.635132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.064 [2024-10-01 13:52:43.635274] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.064 [2024-10-01 13:52:43.635309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.064 [2024-10-01 13:52:43.635328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.064 [2024-10-01 13:52:43.635363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.064 [2024-10-01 13:52:43.635396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.064 [2024-10-01 13:52:43.635413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.064 [2024-10-01 13:52:43.635429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.064 [2024-10-01 13:52:43.635462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.064 [2024-10-01 13:52:43.641480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.064 [2024-10-01 13:52:43.641630] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.064 [2024-10-01 13:52:43.641666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.064 [2024-10-01 13:52:43.641685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.064 [2024-10-01 13:52:43.641725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.064 [2024-10-01 13:52:43.641761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.064 [2024-10-01 13:52:43.641779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.064 [2024-10-01 13:52:43.641795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.064 [2024-10-01 13:52:43.643106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.064 [2024-10-01 13:52:43.645721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.064 [2024-10-01 13:52:43.645862] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.064 [2024-10-01 13:52:43.645896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.064 [2024-10-01 13:52:43.645933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.064 [2024-10-01 13:52:43.645986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.064 [2024-10-01 13:52:43.646021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.064 [2024-10-01 13:52:43.646039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.064 [2024-10-01 13:52:43.646055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.064 [2024-10-01 13:52:43.646087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.064 [2024-10-01 13:52:43.653264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.064 [2024-10-01 13:52:43.653536] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.064 [2024-10-01 13:52:43.653582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.064 [2024-10-01 13:52:43.653609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.064 [2024-10-01 13:52:43.653655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.064 [2024-10-01 13:52:43.653691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.064 [2024-10-01 13:52:43.653709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.064 [2024-10-01 13:52:43.653726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.064 [2024-10-01 13:52:43.653758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.064 [2024-10-01 13:52:43.655822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.064 [2024-10-01 13:52:43.655967] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.064 [2024-10-01 13:52:43.656002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.064 [2024-10-01 13:52:43.656036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.064 [2024-10-01 13:52:43.657291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.064 [2024-10-01 13:52:43.658246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.064 [2024-10-01 13:52:43.658290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.064 [2024-10-01 13:52:43.658311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.064 [2024-10-01 13:52:43.658443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.064 [2024-10-01 13:52:43.663389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.064 [2024-10-01 13:52:43.663540] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.064 [2024-10-01 13:52:43.663576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.064 [2024-10-01 13:52:43.663638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.064 [2024-10-01 13:52:43.663676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.064 [2024-10-01 13:52:43.663710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.064 [2024-10-01 13:52:43.663727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.064 [2024-10-01 13:52:43.663743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.064 [2024-10-01 13:52:43.663783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.064 [2024-10-01 13:52:43.667129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.064 [2024-10-01 13:52:43.667497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.064 [2024-10-01 13:52:43.667543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.064 [2024-10-01 13:52:43.667564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.064 [2024-10-01 13:52:43.667716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.064 [2024-10-01 13:52:43.667766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.064 [2024-10-01 13:52:43.667785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.064 [2024-10-01 13:52:43.667800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.064 [2024-10-01 13:52:43.667833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.064 [2024-10-01 13:52:43.674045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.064 [2024-10-01 13:52:43.674186] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.064 [2024-10-01 13:52:43.674221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.064 [2024-10-01 13:52:43.674240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.064 [2024-10-01 13:52:43.674275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.064 [2024-10-01 13:52:43.674313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.064 [2024-10-01 13:52:43.674333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.064 [2024-10-01 13:52:43.674348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.064 [2024-10-01 13:52:43.674378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.064 [2024-10-01 13:52:43.677235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.064 [2024-10-01 13:52:43.677354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.064 [2024-10-01 13:52:43.677394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.064 [2024-10-01 13:52:43.677414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.064 [2024-10-01 13:52:43.677449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.064 [2024-10-01 13:52:43.677481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.064 [2024-10-01 13:52:43.677527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.064 [2024-10-01 13:52:43.677543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.064 [2024-10-01 13:52:43.677576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.064 [2024-10-01 13:52:43.684141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.064 [2024-10-01 13:52:43.684270] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.064 [2024-10-01 13:52:43.684313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.064 [2024-10-01 13:52:43.684332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.064 [2024-10-01 13:52:43.685543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.064 [2024-10-01 13:52:43.686461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.064 [2024-10-01 13:52:43.686506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.064 [2024-10-01 13:52:43.686527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.064 [2024-10-01 13:52:43.686657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.064 [2024-10-01 13:52:43.687334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.064 [2024-10-01 13:52:43.688030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.064 [2024-10-01 13:52:43.688083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.064 [2024-10-01 13:52:43.688105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.064 [2024-10-01 13:52:43.688285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.065 [2024-10-01 13:52:43.688412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.065 [2024-10-01 13:52:43.688435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.065 [2024-10-01 13:52:43.688450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.065 [2024-10-01 13:52:43.688490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.065 [2024-10-01 13:52:43.695457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.065 [2024-10-01 13:52:43.695586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.065 [2024-10-01 13:52:43.695619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.065 [2024-10-01 13:52:43.695640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.065 [2024-10-01 13:52:43.695904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.065 [2024-10-01 13:52:43.696108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.065 [2024-10-01 13:52:43.696146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.065 [2024-10-01 13:52:43.696166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.065 [2024-10-01 13:52:43.696208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.065 [2024-10-01 13:52:43.697990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.065 [2024-10-01 13:52:43.698106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.065 [2024-10-01 13:52:43.698139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.065 [2024-10-01 13:52:43.698167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.065 [2024-10-01 13:52:43.698201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.065 [2024-10-01 13:52:43.698232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.065 [2024-10-01 13:52:43.698250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.065 [2024-10-01 13:52:43.698270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.065 [2024-10-01 13:52:43.698303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.065 [2024-10-01 13:52:43.705854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.065 [2024-10-01 13:52:43.706086] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.065 [2024-10-01 13:52:43.706129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.065 [2024-10-01 13:52:43.706161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.065 [2024-10-01 13:52:43.706207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.065 [2024-10-01 13:52:43.706255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.065 [2024-10-01 13:52:43.706278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.065 [2024-10-01 13:52:43.706299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.065 [2024-10-01 13:52:43.706338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.065 [2024-10-01 13:52:43.710251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.065 [2024-10-01 13:52:43.710622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.065 [2024-10-01 13:52:43.710664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.065 [2024-10-01 13:52:43.710689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.065 [2024-10-01 13:52:43.710742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.065 [2024-10-01 13:52:43.710802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.065 [2024-10-01 13:52:43.710828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.065 [2024-10-01 13:52:43.710849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.065 [2024-10-01 13:52:43.710889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.065 [2024-10-01 13:52:43.716814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.065 [2024-10-01 13:52:43.716998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.065 [2024-10-01 13:52:43.717045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.065 [2024-10-01 13:52:43.717067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.065 [2024-10-01 13:52:43.717134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.065 [2024-10-01 13:52:43.717167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.065 [2024-10-01 13:52:43.717185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.065 [2024-10-01 13:52:43.717200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.065 [2024-10-01 13:52:43.717231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.065 [2024-10-01 13:52:43.720373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.065 [2024-10-01 13:52:43.720510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.065 [2024-10-01 13:52:43.720544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.065 [2024-10-01 13:52:43.720563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.065 [2024-10-01 13:52:43.720596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.065 [2024-10-01 13:52:43.720648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.065 [2024-10-01 13:52:43.720671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.065 [2024-10-01 13:52:43.720686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.065 [2024-10-01 13:52:43.720719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.065 [2024-10-01 13:52:43.726948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.065 [2024-10-01 13:52:43.727091] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.065 [2024-10-01 13:52:43.727135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.065 [2024-10-01 13:52:43.727156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.065 [2024-10-01 13:52:43.728369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.065 [2024-10-01 13:52:43.729267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.065 [2024-10-01 13:52:43.729308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.065 [2024-10-01 13:52:43.729329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.065 [2024-10-01 13:52:43.729474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.065 [2024-10-01 13:52:43.731049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.065 [2024-10-01 13:52:43.731178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.065 [2024-10-01 13:52:43.731221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.065 [2024-10-01 13:52:43.731242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.065 [2024-10-01 13:52:43.731276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.065 [2024-10-01 13:52:43.731308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.065 [2024-10-01 13:52:43.731326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.065 [2024-10-01 13:52:43.731376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.065 [2024-10-01 13:52:43.731411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.065 [2024-10-01 13:52:43.738239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.065 [2024-10-01 13:52:43.738611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.065 [2024-10-01 13:52:43.738657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.065 [2024-10-01 13:52:43.738677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.065 [2024-10-01 13:52:43.738821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.065 [2024-10-01 13:52:43.738879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.065 [2024-10-01 13:52:43.738927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.065 [2024-10-01 13:52:43.738945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.065 [2024-10-01 13:52:43.738978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.065 [2024-10-01 13:52:43.741147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.065 [2024-10-01 13:52:43.741262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.065 [2024-10-01 13:52:43.741304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.065 [2024-10-01 13:52:43.741324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.065 [2024-10-01 13:52:43.741357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.065 [2024-10-01 13:52:43.741389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.065 [2024-10-01 13:52:43.741406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.065 [2024-10-01 13:52:43.741420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.065 [2024-10-01 13:52:43.742642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.066 [2024-10-01 13:52:43.748578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.066 [2024-10-01 13:52:43.748724] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.066 [2024-10-01 13:52:43.748769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.066 [2024-10-01 13:52:43.748791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.066 [2024-10-01 13:52:43.748827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.066 [2024-10-01 13:52:43.748860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.066 [2024-10-01 13:52:43.748877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.066 [2024-10-01 13:52:43.748893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.066 [2024-10-01 13:52:43.748945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.066 [2024-10-01 13:52:43.752688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.066 [2024-10-01 13:52:43.753126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.066 [2024-10-01 13:52:43.753169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.066 [2024-10-01 13:52:43.753193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.066 [2024-10-01 13:52:43.753330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.066 [2024-10-01 13:52:43.753376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.066 [2024-10-01 13:52:43.753404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.066 [2024-10-01 13:52:43.753427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.066 [2024-10-01 13:52:43.753463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.066 [2024-10-01 13:52:43.758683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.066 [2024-10-01 13:52:43.758808] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.066 [2024-10-01 13:52:43.758850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.066 [2024-10-01 13:52:43.758871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.066 [2024-10-01 13:52:43.759490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.066 [2024-10-01 13:52:43.759686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.066 [2024-10-01 13:52:43.759723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.066 [2024-10-01 13:52:43.759741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.066 [2024-10-01 13:52:43.759861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.066 [2024-10-01 13:52:43.763036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.066 [2024-10-01 13:52:43.763153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.066 [2024-10-01 13:52:43.763187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.066 [2024-10-01 13:52:43.763205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.066 [2024-10-01 13:52:43.763239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.066 [2024-10-01 13:52:43.763271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.066 [2024-10-01 13:52:43.763289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.066 [2024-10-01 13:52:43.763304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.066 [2024-10-01 13:52:43.763336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.066 [2024-10-01 13:52:43.769421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.066 [2024-10-01 13:52:43.769552] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.066 [2024-10-01 13:52:43.769587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.066 [2024-10-01 13:52:43.769606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.066 [2024-10-01 13:52:43.769670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.066 [2024-10-01 13:52:43.769704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.066 [2024-10-01 13:52:43.769723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.066 [2024-10-01 13:52:43.769738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.066 [2024-10-01 13:52:43.769769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.066 [2024-10-01 13:52:43.773892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.066 [2024-10-01 13:52:43.774029] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.066 [2024-10-01 13:52:43.774061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.066 [2024-10-01 13:52:43.774080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.066 [2024-10-01 13:52:43.774114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.066 [2024-10-01 13:52:43.774145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.066 [2024-10-01 13:52:43.774163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.066 [2024-10-01 13:52:43.774178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.066 [2024-10-01 13:52:43.774210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.066 [2024-10-01 13:52:43.781091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.066 [2024-10-01 13:52:43.781214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.066 [2024-10-01 13:52:43.781252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.066 [2024-10-01 13:52:43.781270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.066 [2024-10-01 13:52:43.781530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.066 [2024-10-01 13:52:43.781680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.066 [2024-10-01 13:52:43.781711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.066 [2024-10-01 13:52:43.781728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.066 [2024-10-01 13:52:43.781770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.066 [2024-10-01 13:52:43.783994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.066 [2024-10-01 13:52:43.784105] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.066 [2024-10-01 13:52:43.784137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.066 [2024-10-01 13:52:43.784156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.066 [2024-10-01 13:52:43.784188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.066 [2024-10-01 13:52:43.784220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.066 [2024-10-01 13:52:43.784238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.066 [2024-10-01 13:52:43.784252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.066 [2024-10-01 13:52:43.785473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.066 [2024-10-01 13:52:43.791401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.066 [2024-10-01 13:52:43.791529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.066 [2024-10-01 13:52:43.791561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.066 [2024-10-01 13:52:43.791588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.066 [2024-10-01 13:52:43.791622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.066 [2024-10-01 13:52:43.791653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.066 [2024-10-01 13:52:43.791671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.066 [2024-10-01 13:52:43.791686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.066 [2024-10-01 13:52:43.791718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.066 [2024-10-01 13:52:43.795414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.066 [2024-10-01 13:52:43.795541] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.066 [2024-10-01 13:52:43.795574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.066 [2024-10-01 13:52:43.795593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.066 [2024-10-01 13:52:43.795852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.066 [2024-10-01 13:52:43.796038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.066 [2024-10-01 13:52:43.796073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.066 [2024-10-01 13:52:43.796091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.066 [2024-10-01 13:52:43.796134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.066 [2024-10-01 13:52:43.801502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.066 [2024-10-01 13:52:43.801661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.066 [2024-10-01 13:52:43.801698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.067 [2024-10-01 13:52:43.801717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.067 [2024-10-01 13:52:43.802385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.067 [2024-10-01 13:52:43.802653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.067 [2024-10-01 13:52:43.802690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.067 [2024-10-01 13:52:43.802714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.067 [2024-10-01 13:52:43.802871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.067 [2024-10-01 13:52:43.805957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.067 [2024-10-01 13:52:43.806105] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.067 [2024-10-01 13:52:43.806140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.067 [2024-10-01 13:52:43.806193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.067 [2024-10-01 13:52:43.806231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.067 [2024-10-01 13:52:43.806265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.067 [2024-10-01 13:52:43.806284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.067 [2024-10-01 13:52:43.806299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.067 [2024-10-01 13:52:43.806331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.067 [2024-10-01 13:52:43.812425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.067 [2024-10-01 13:52:43.812553] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.067 [2024-10-01 13:52:43.812586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.067 [2024-10-01 13:52:43.812604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.067 [2024-10-01 13:52:43.812638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.067 [2024-10-01 13:52:43.812669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.067 [2024-10-01 13:52:43.812687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.067 [2024-10-01 13:52:43.812703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.067 [2024-10-01 13:52:43.812734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.067 [2024-10-01 13:52:43.816055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.067 [2024-10-01 13:52:43.816167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.067 [2024-10-01 13:52:43.816198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.067 [2024-10-01 13:52:43.816217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.067 [2024-10-01 13:52:43.816801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.067 [2024-10-01 13:52:43.817036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.067 [2024-10-01 13:52:43.817066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.067 [2024-10-01 13:52:43.817083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.067 [2024-10-01 13:52:43.817202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.067 [2024-10-01 13:52:43.824195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.067 [2024-10-01 13:52:43.824372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.067 [2024-10-01 13:52:43.824405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.067 [2024-10-01 13:52:43.824423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.067 [2024-10-01 13:52:43.824680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.067 [2024-10-01 13:52:43.824837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.067 [2024-10-01 13:52:43.824897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.067 [2024-10-01 13:52:43.824937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.067 [2024-10-01 13:52:43.825007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.067 [2024-10-01 13:52:43.826819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.067 [2024-10-01 13:52:43.826944] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.067 [2024-10-01 13:52:43.826977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.067 [2024-10-01 13:52:43.826996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.067 [2024-10-01 13:52:43.827030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.067 [2024-10-01 13:52:43.827061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.067 [2024-10-01 13:52:43.827079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.067 [2024-10-01 13:52:43.827107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.067 [2024-10-01 13:52:43.827139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.067 [2024-10-01 13:52:43.834520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.067 [2024-10-01 13:52:43.834656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.067 [2024-10-01 13:52:43.834688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.067 [2024-10-01 13:52:43.834706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.067 [2024-10-01 13:52:43.834740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.067 [2024-10-01 13:52:43.834771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.067 [2024-10-01 13:52:43.834789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.067 [2024-10-01 13:52:43.834803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.067 [2024-10-01 13:52:43.834845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.067 [2024-10-01 13:52:43.838489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.067 [2024-10-01 13:52:43.838611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.067 [2024-10-01 13:52:43.838643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.067 [2024-10-01 13:52:43.838661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.067 [2024-10-01 13:52:43.838943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.067 [2024-10-01 13:52:43.839102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.067 [2024-10-01 13:52:43.839127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.067 [2024-10-01 13:52:43.839143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.067 [2024-10-01 13:52:43.839183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.067 [2024-10-01 13:52:43.844643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.067 [2024-10-01 13:52:43.844821] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.067 [2024-10-01 13:52:43.844866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.067 [2024-10-01 13:52:43.844892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.067 [2024-10-01 13:52:43.845570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.067 [2024-10-01 13:52:43.845797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.067 [2024-10-01 13:52:43.845840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.067 [2024-10-01 13:52:43.845863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.067 [2024-10-01 13:52:43.846012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.067 [2024-10-01 13:52:43.848803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.067 [2024-10-01 13:52:43.848958] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.067 [2024-10-01 13:52:43.848991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.067 [2024-10-01 13:52:43.849010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.067 [2024-10-01 13:52:43.849045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.067 [2024-10-01 13:52:43.849077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.067 [2024-10-01 13:52:43.849095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.067 [2024-10-01 13:52:43.849109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.067 [2024-10-01 13:52:43.849141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.067 [2024-10-01 13:52:43.855440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.067 [2024-10-01 13:52:43.855630] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.067 [2024-10-01 13:52:43.855666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.067 [2024-10-01 13:52:43.855686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.067 [2024-10-01 13:52:43.855722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.067 [2024-10-01 13:52:43.855755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.068 [2024-10-01 13:52:43.855773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.068 [2024-10-01 13:52:43.855800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.068 [2024-10-01 13:52:43.855833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.068 [2024-10-01 13:52:43.858898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.068 [2024-10-01 13:52:43.859047] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.068 [2024-10-01 13:52:43.859080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.068 [2024-10-01 13:52:43.859141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.068 [2024-10-01 13:52:43.859754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.068 [2024-10-01 13:52:43.859977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.068 [2024-10-01 13:52:43.860012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.068 [2024-10-01 13:52:43.860029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.068 [2024-10-01 13:52:43.860148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.068 [2024-10-01 13:52:43.865874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.068 [2024-10-01 13:52:43.866039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.068 [2024-10-01 13:52:43.866074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.068 [2024-10-01 13:52:43.866094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.068 [2024-10-01 13:52:43.866292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.068 [2024-10-01 13:52:43.866386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.068 [2024-10-01 13:52:43.866411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.068 [2024-10-01 13:52:43.866428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.068 [2024-10-01 13:52:43.866462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.068 8240.93 IOPS, 32.19 MiB/s [2024-10-01 13:52:43.870137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.068 [2024-10-01 13:52:43.870723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.068 [2024-10-01 13:52:43.870769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.068 [2024-10-01 13:52:43.870790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.068 [2024-10-01 13:52:43.870964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.068 [2024-10-01 13:52:43.871087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.068 [2024-10-01 13:52:43.871110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.068 [2024-10-01 13:52:43.871127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.068 [2024-10-01 13:52:43.871241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.068 00:18:35.068 Latency(us) 00:18:35.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.068 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:35.068 Verification LBA range: start 0x0 length 0x4000 00:18:35.068 NVMe0n1 : 15.01 8240.91 32.19 0.00 0.00 15497.88 1608.61 20614.05 00:18:35.068 =================================================================================================================== 00:18:35.068 Total : 8240.91 32.19 0.00 0.00 15497.88 1608.61 20614.05 00:18:35.068 [2024-10-01 13:52:43.876955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.068 [2024-10-01 13:52:43.877153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.068 [2024-10-01 13:52:43.877189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.068 [2024-10-01 13:52:43.877210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.068 [2024-10-01 13:52:43.877235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.068 [2024-10-01 13:52:43.877262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.068 [2024-10-01 13:52:43.877278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.068 [2024-10-01 13:52:43.877294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.068 [2024-10-01 13:52:43.877314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.068 [2024-10-01 13:52:43.880219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.068 [2024-10-01 13:52:43.880370] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.068 [2024-10-01 13:52:43.880409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.068 [2024-10-01 13:52:43.880434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.068 [2024-10-01 13:52:43.880464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.068 [2024-10-01 13:52:43.880490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.068 [2024-10-01 13:52:43.880511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.068 [2024-10-01 13:52:43.880532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.068 [2024-10-01 13:52:43.880567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.068 [2024-10-01 13:52:43.887093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.068 [2024-10-01 13:52:43.887294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.068 [2024-10-01 13:52:43.887329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.068 [2024-10-01 13:52:43.887349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.068 [2024-10-01 13:52:43.887375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.068 [2024-10-01 13:52:43.887397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.068 [2024-10-01 13:52:43.887414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.068 [2024-10-01 13:52:43.887430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.068 [2024-10-01 13:52:43.887468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.068 [2024-10-01 13:52:43.890302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.068 [2024-10-01 13:52:43.890409] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.068 [2024-10-01 13:52:43.890439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.068 [2024-10-01 13:52:43.890458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.068 [2024-10-01 13:52:43.890481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.068 [2024-10-01 13:52:43.890549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.068 [2024-10-01 13:52:43.890569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.068 [2024-10-01 13:52:43.890584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.068 [2024-10-01 13:52:43.890603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.068 [2024-10-01 13:52:43.897211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.068 [2024-10-01 13:52:43.897354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.068 [2024-10-01 13:52:43.897386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.068 [2024-10-01 13:52:43.897405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.068 [2024-10-01 13:52:43.897429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.068 [2024-10-01 13:52:43.897449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.068 [2024-10-01 13:52:43.897466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.068 [2024-10-01 13:52:43.897481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.068 [2024-10-01 13:52:43.897500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.068 [2024-10-01 13:52:43.900363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.068 [2024-10-01 13:52:43.900453] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.068 [2024-10-01 13:52:43.900481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.068 [2024-10-01 13:52:43.900500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.068 [2024-10-01 13:52:43.900522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.068 [2024-10-01 13:52:43.900542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.069 [2024-10-01 13:52:43.900557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.069 [2024-10-01 13:52:43.900572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.069 [2024-10-01 13:52:43.900590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.069 [2024-10-01 13:52:43.907295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.069 [2024-10-01 13:52:43.907405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.069 [2024-10-01 13:52:43.907437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.069 [2024-10-01 13:52:43.907455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.069 [2024-10-01 13:52:43.907478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.069 [2024-10-01 13:52:43.907513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.069 [2024-10-01 13:52:43.907532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.069 [2024-10-01 13:52:43.907548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.069 [2024-10-01 13:52:43.907596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.069 [2024-10-01 13:52:43.910420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.069 [2024-10-01 13:52:43.910506] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.069 [2024-10-01 13:52:43.910545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.069 [2024-10-01 13:52:43.910566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.069 [2024-10-01 13:52:43.910588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.069 [2024-10-01 13:52:43.910608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.069 [2024-10-01 13:52:43.910622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.069 [2024-10-01 13:52:43.910636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.069 [2024-10-01 13:52:43.910656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.069 [2024-10-01 13:52:43.917368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.069 [2024-10-01 13:52:43.917492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.069 [2024-10-01 13:52:43.917523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.069 [2024-10-01 13:52:43.917541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.069 [2024-10-01 13:52:43.917564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.069 [2024-10-01 13:52:43.917584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.069 [2024-10-01 13:52:43.917598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.069 [2024-10-01 13:52:43.917613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.069 [2024-10-01 13:52:43.917631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.069 [2024-10-01 13:52:43.920475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.069 [2024-10-01 13:52:43.920569] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.069 [2024-10-01 13:52:43.920598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.069 [2024-10-01 13:52:43.920617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.069 [2024-10-01 13:52:43.920639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.069 [2024-10-01 13:52:43.920659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.069 [2024-10-01 13:52:43.920674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.069 [2024-10-01 13:52:43.920689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.069 [2024-10-01 13:52:43.920708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.069 [2024-10-01 13:52:43.927454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.069 [2024-10-01 13:52:43.927617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.069 [2024-10-01 13:52:43.927648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb280 with addr=10.0.0.3, port=4421 00:18:35.069 [2024-10-01 13:52:43.927701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb280 is same with the state(6) to be set 00:18:35.069 [2024-10-01 13:52:43.927750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bb280 (9): Bad file descriptor 00:18:35.069 [2024-10-01 13:52:43.927775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.069 [2024-10-01 13:52:43.927790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.069 [2024-10-01 13:52:43.927806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.069 [2024-10-01 13:52:43.927825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.069 [2024-10-01 13:52:43.930545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.069 [2024-10-01 13:52:43.930646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.069 [2024-10-01 13:52:43.930675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b39a0 with addr=10.0.0.3, port=4422 00:18:35.069 [2024-10-01 13:52:43.930694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b39a0 is same with the state(6) to be set 00:18:35.069 [2024-10-01 13:52:43.930715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b39a0 (9): Bad file descriptor 00:18:35.069 [2024-10-01 13:52:43.930735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.069 [2024-10-01 13:52:43.930749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.069 [2024-10-01 13:52:43.930764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.069 [2024-10-01 13:52:43.930782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:35.069 Received shutdown signal, test time was about 15.000000 seconds 00:18:35.069 00:18:35.069 Latency(us) 00:18:35.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.069 =================================================================================================================== 00:18:35.069 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:35.069 Process with pid 75867 is not found 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # killprocess 75867 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75867 ']' 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75867 00:18:35.069 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (75867) - No such process 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@977 -- # echo 'Process with pid 75867 is not found' 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # nvmftestfini 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:35.069 rmmod nvme_tcp 00:18:35.069 rmmod nvme_fabrics 00:18:35.069 rmmod nvme_keyring 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 75804 ']' 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 75804 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75804 ']' 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75804 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75804 00:18:35.069 killing process with pid 75804 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75804' 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75804 00:18:35.069 13:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # 
wait 75804 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.333 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # exit 1 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # trap - ERR 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # print_backtrace 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' 
'/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh' 'nvmf_failover' '--transport=tcp') 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # local args 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1157 -- # xtrace_disable 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:35.334 ========== Backtrace start: ========== 00:18:35.334 00:18:35.334 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_failover"],["/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh"],["--transport=tcp"]) 00:18:35.334 ... 00:18:35.334 1120 timing_enter $test_name 00:18:35.334 1121 echo "************************************" 00:18:35.334 1122 echo "START TEST $test_name" 00:18:35.334 1123 echo "************************************" 00:18:35.334 1124 xtrace_restore 00:18:35.334 1125 time "$@" 00:18:35.334 1126 xtrace_disable 00:18:35.334 1127 echo "************************************" 00:18:35.334 1128 echo "END TEST $test_name" 00:18:35.334 1129 echo "************************************" 00:18:35.334 1130 timing_exit $test_name 00:18:35.334 ... 00:18:35.334 in /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh:25 -> main(["--transport=tcp"]) 00:18:35.334 ... 00:18:35.334 20 fi 00:18:35.334 21 00:18:35.334 22 run_test "nvmf_identify" $rootdir/test/nvmf/host/identify.sh "${TEST_ARGS[@]}" 00:18:35.334 23 run_test "nvmf_perf" $rootdir/test/nvmf/host/perf.sh "${TEST_ARGS[@]}" 00:18:35.334 24 run_test "nvmf_fio_host" $rootdir/test/nvmf/host/fio.sh "${TEST_ARGS[@]}" 00:18:35.334 => 25 run_test "nvmf_failover" $rootdir/test/nvmf/host/failover.sh "${TEST_ARGS[@]}" 00:18:35.334 26 run_test "nvmf_host_discovery" $rootdir/test/nvmf/host/discovery.sh "${TEST_ARGS[@]}" 00:18:35.334 27 run_test "nvmf_host_multipath_status" $rootdir/test/nvmf/host/multipath_status.sh "${TEST_ARGS[@]}" 00:18:35.334 28 run_test "nvmf_discovery_remove_ifc" $rootdir/test/nvmf/host/discovery_remove_ifc.sh "${TEST_ARGS[@]}" 00:18:35.334 29 run_test "nvmf_identify_kernel_target" "$rootdir/test/nvmf/host/identify_kernel_nvmf.sh" "${TEST_ARGS[@]}" 00:18:35.334 30 run_test "nvmf_auth_host" "$rootdir/test/nvmf/host/auth.sh" "${TEST_ARGS[@]}" 00:18:35.334 ... 
00:18:35.334 00:18:35.334 ========== Backtrace end ========== 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1194 -- # return 0 00:18:35.334 00:18:35.334 real 0m22.217s 00:18:35.334 user 1m20.816s 00:18:35.334 sys 0m4.864s 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1 -- # exit 1 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # trap - ERR 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # print_backtrace 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh' 'nvmf_host' '--transport=tcp') 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # local args 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1157 -- # xtrace_disable 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.334 ========== Backtrace start: ========== 00:18:35.334 00:18:35.334 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_host"],["/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh"],["--transport=tcp"]) 00:18:35.334 ... 00:18:35.334 1120 timing_enter $test_name 00:18:35.334 1121 echo "************************************" 00:18:35.334 1122 echo "START TEST $test_name" 00:18:35.334 1123 echo "************************************" 00:18:35.334 1124 xtrace_restore 00:18:35.334 1125 time "$@" 00:18:35.334 1126 xtrace_disable 00:18:35.334 1127 echo "************************************" 00:18:35.334 1128 echo "END TEST $test_name" 00:18:35.334 1129 echo "************************************" 00:18:35.334 1130 timing_exit $test_name 00:18:35.334 ... 00:18:35.334 in /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh:16 -> main(["--transport=tcp"]) 00:18:35.334 ... 00:18:35.334 11 exit 0 00:18:35.334 12 fi 00:18:35.334 13 00:18:35.334 14 run_test "nvmf_target_core" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:18:35.334 15 run_test "nvmf_target_extra" $rootdir/test/nvmf/nvmf_target_extra.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:18:35.334 => 16 run_test "nvmf_host" $rootdir/test/nvmf/nvmf_host.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:18:35.334 17 00:18:35.334 18 # Interrupt mode for now is supported only on the target, with the TCP transport and posix or ssl socket implementations. 00:18:35.334 19 if [[ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" && $SPDK_TEST_URING -eq 0 ]]; then 00:18:35.334 20 run_test "nvmf_target_core_interrupt_mode" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:18:35.334 21 run_test "nvmf_interrupt" $rootdir/test/nvmf/target/interrupt.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:18:35.334 ... 
00:18:35.334 00:18:35.334 ========== Backtrace end ========== 00:18:35.334 13:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1194 -- # return 0 00:18:35.334 00:18:35.334 real 0m50.352s 00:18:35.334 user 3m0.505s 00:18:35.334 sys 0m12.349s 00:18:35.334 13:52:45 nvmf_tcp -- common/autotest_common.sh@1125 -- # trap - ERR 00:18:35.334 13:52:45 nvmf_tcp -- common/autotest_common.sh@1125 -- # print_backtrace 00:18:35.334 13:52:45 nvmf_tcp -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:18:35.334 13:52:45 nvmf_tcp -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh' 'nvmf_tcp' '/home/vagrant/spdk_repo/autorun-spdk.conf') 00:18:35.334 13:52:45 nvmf_tcp -- common/autotest_common.sh@1155 -- # local args 00:18:35.334 13:52:45 nvmf_tcp -- common/autotest_common.sh@1157 -- # xtrace_disable 00:18:35.334 13:52:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:35.334 ========== Backtrace start: ========== 00:18:35.334 00:18:35.334 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_tcp"],["/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh"],["--transport=tcp"]) 00:18:35.334 ... 00:18:35.334 1120 timing_enter $test_name 00:18:35.334 1121 echo "************************************" 00:18:35.334 1122 echo "START TEST $test_name" 00:18:35.334 1123 echo "************************************" 00:18:35.334 1124 xtrace_restore 00:18:35.334 1125 time "$@" 00:18:35.334 1126 xtrace_disable 00:18:35.334 1127 echo "************************************" 00:18:35.334 1128 echo "END TEST $test_name" 00:18:35.334 1129 echo "************************************" 00:18:35.334 1130 timing_exit $test_name 00:18:35.334 ... 00:18:35.334 in /home/vagrant/spdk_repo/spdk/autotest.sh:280 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:18:35.334 ... 00:18:35.334 275 # list of all tests can properly differentiate them. Please do not merge them into one line. 00:18:35.334 276 if [ "$SPDK_TEST_NVMF_TRANSPORT" = "rdma" ]; then 00:18:35.334 277 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:18:35.334 278 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:18:35.334 279 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then 00:18:35.334 => 280 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:18:35.334 281 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:18:35.334 282 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:18:35.334 283 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:18:35.334 284 fi 00:18:35.334 285 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:18:35.334 ... 
00:18:35.334 00:18:35.334 ========== Backtrace end ========== 00:18:35.334 13:52:45 nvmf_tcp -- common/autotest_common.sh@1194 -- # return 0 00:18:35.334 00:18:35.334 real 9m7.358s 00:18:35.334 user 21m56.053s 00:18:35.334 sys 2m17.684s 00:18:35.334 13:52:45 nvmf_tcp -- common/autotest_common.sh@1 -- # autotest_cleanup 00:18:35.334 13:52:45 nvmf_tcp -- common/autotest_common.sh@1392 -- # local autotest_es=1 00:18:35.334 13:52:45 nvmf_tcp -- common/autotest_common.sh@1393 -- # xtrace_disable 00:18:35.334 13:52:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:47.529 INFO: APP EXITING 00:18:47.529 INFO: killing all VMs 00:18:47.529 INFO: killing vhost app 00:18:47.529 INFO: EXIT DONE 00:18:47.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:47.529 Waiting for block devices as requested 00:18:47.529 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:47.787 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:48.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:48.723 Cleaning 00:18:48.723 Removing: /var/run/dpdk/spdk0/config 00:18:48.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:18:48.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:18:48.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:18:48.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:18:48.723 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:18:48.723 Removing: /var/run/dpdk/spdk0/hugepage_info 00:18:48.723 Removing: /var/run/dpdk/spdk1/config 00:18:48.723 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:18:48.723 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:18:48.723 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:18:48.723 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:18:48.723 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:18:48.723 Removing: /var/run/dpdk/spdk1/hugepage_info 00:18:48.723 Removing: /var/run/dpdk/spdk2/config 00:18:48.723 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:18:48.723 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:18:48.723 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:18:48.723 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:18:48.723 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:18:48.723 Removing: /var/run/dpdk/spdk2/hugepage_info 00:18:48.723 Removing: /var/run/dpdk/spdk3/config 00:18:48.723 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:18:48.723 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:18:48.723 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:18:48.723 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:18:48.723 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:18:48.723 Removing: /var/run/dpdk/spdk3/hugepage_info 00:18:48.723 Removing: /var/run/dpdk/spdk4/config 00:18:48.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:18:48.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:18:48.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:18:48.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:18:48.723 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:18:48.723 Removing: /var/run/dpdk/spdk4/hugepage_info 00:18:48.723 Removing: /dev/shm/nvmf_trace.0 00:18:48.723 Removing: /dev/shm/spdk_tgt_trace.pid56887 00:18:48.723 Removing: /var/run/dpdk/spdk0 00:18:48.723 Removing: /var/run/dpdk/spdk1 00:18:48.723 Removing: 
/var/run/dpdk/spdk2 00:18:48.723 Removing: /var/run/dpdk/spdk3 00:18:48.723 Removing: /var/run/dpdk/spdk4 00:18:48.723 Removing: /var/run/dpdk/spdk_pid56729 00:18:48.723 Removing: /var/run/dpdk/spdk_pid56887 00:18:48.723 Removing: /var/run/dpdk/spdk_pid57092 00:18:48.723 Removing: /var/run/dpdk/spdk_pid57174 00:18:48.723 Removing: /var/run/dpdk/spdk_pid57207 00:18:48.723 Removing: /var/run/dpdk/spdk_pid57317 00:18:48.723 Removing: /var/run/dpdk/spdk_pid57327 00:18:48.723 Removing: /var/run/dpdk/spdk_pid57467 00:18:48.723 Removing: /var/run/dpdk/spdk_pid57668 00:18:48.723 Removing: /var/run/dpdk/spdk_pid57821 00:18:48.723 Removing: /var/run/dpdk/spdk_pid57894 00:18:48.723 Removing: /var/run/dpdk/spdk_pid57984 00:18:48.723 Removing: /var/run/dpdk/spdk_pid58083 00:18:48.723 Removing: /var/run/dpdk/spdk_pid58168 00:18:48.723 Removing: /var/run/dpdk/spdk_pid58201 00:18:48.723 Removing: /var/run/dpdk/spdk_pid58242 00:18:48.723 Removing: /var/run/dpdk/spdk_pid58306 00:18:48.723 Removing: /var/run/dpdk/spdk_pid58422 00:18:48.723 Removing: /var/run/dpdk/spdk_pid58885 00:18:48.723 Removing: /var/run/dpdk/spdk_pid58929 00:18:48.723 Removing: /var/run/dpdk/spdk_pid58980 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59002 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59069 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59083 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59150 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59166 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59217 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59235 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59275 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59304 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59439 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59470 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59553 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59892 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59904 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59941 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59954 00:18:48.723 Removing: /var/run/dpdk/spdk_pid59975 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60000 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60015 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60036 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60055 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60074 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60091 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60115 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60134 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60150 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60173 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60188 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60205 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60230 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60243 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60259 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60295 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60314 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60343 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60415 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60444 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60459 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60482 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60497 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60510 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60558 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60566 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60600 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60615 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60630 00:18:48.983 Removing: 
/var/run/dpdk/spdk_pid60634 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60649 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60664 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60668 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60683 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60714 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60740 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60755 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60784 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60793 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60806 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60847 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60858 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60890 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60898 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60911 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60918 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60931 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60939 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60946 00:18:48.983 Removing: /var/run/dpdk/spdk_pid60957 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61039 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61092 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61221 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61254 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61294 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61314 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61336 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61356 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61391 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61406 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61484 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61511 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61555 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61639 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61701 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61731 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61835 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61883 00:18:48.983 Removing: /var/run/dpdk/spdk_pid61910 00:18:48.983 Removing: /var/run/dpdk/spdk_pid62142 00:18:48.983 Removing: /var/run/dpdk/spdk_pid62245 00:18:48.983 Removing: /var/run/dpdk/spdk_pid62268 00:18:48.983 Removing: /var/run/dpdk/spdk_pid62303 00:18:48.984 Removing: /var/run/dpdk/spdk_pid62342 00:18:48.984 Removing: /var/run/dpdk/spdk_pid62370 00:18:48.984 Removing: /var/run/dpdk/spdk_pid62410 00:18:48.984 Removing: /var/run/dpdk/spdk_pid62446 00:18:48.984 Removing: /var/run/dpdk/spdk_pid62848 00:18:48.984 Removing: /var/run/dpdk/spdk_pid62891 00:18:48.984 Removing: /var/run/dpdk/spdk_pid63245 00:18:48.984 Removing: /var/run/dpdk/spdk_pid63716 00:18:49.240 Removing: /var/run/dpdk/spdk_pid64003 00:18:49.240 Removing: /var/run/dpdk/spdk_pid64900 00:18:49.240 Removing: /var/run/dpdk/spdk_pid65842 00:18:49.240 Removing: /var/run/dpdk/spdk_pid65965 00:18:49.240 Removing: /var/run/dpdk/spdk_pid66027 00:18:49.240 Removing: /var/run/dpdk/spdk_pid67476 00:18:49.240 Removing: /var/run/dpdk/spdk_pid67800 00:18:49.240 Removing: /var/run/dpdk/spdk_pid71688 00:18:49.240 Removing: /var/run/dpdk/spdk_pid72067 00:18:49.240 Removing: /var/run/dpdk/spdk_pid72175 00:18:49.240 Removing: /var/run/dpdk/spdk_pid72311 00:18:49.240 Removing: /var/run/dpdk/spdk_pid72345 00:18:49.241 Removing: /var/run/dpdk/spdk_pid72379 00:18:49.241 Removing: /var/run/dpdk/spdk_pid72418 00:18:49.241 Removing: /var/run/dpdk/spdk_pid72524 00:18:49.241 Removing: /var/run/dpdk/spdk_pid72660 00:18:49.241 Removing: /var/run/dpdk/spdk_pid72835 00:18:49.241 Removing: /var/run/dpdk/spdk_pid72921 
00:18:49.241 Removing: /var/run/dpdk/spdk_pid73120 00:18:49.241 Removing: /var/run/dpdk/spdk_pid73201 00:18:49.241 Removing: /var/run/dpdk/spdk_pid73286 00:18:49.241 Removing: /var/run/dpdk/spdk_pid73659 00:18:49.241 Removing: /var/run/dpdk/spdk_pid74096 00:18:49.241 Removing: /var/run/dpdk/spdk_pid74097 00:18:49.241 Removing: /var/run/dpdk/spdk_pid74098 00:18:49.241 Removing: /var/run/dpdk/spdk_pid74372 00:18:49.241 Removing: /var/run/dpdk/spdk_pid74707 00:18:49.241 Removing: /var/run/dpdk/spdk_pid74709 00:18:49.241 Removing: /var/run/dpdk/spdk_pid75033 00:18:49.241 Removing: /var/run/dpdk/spdk_pid75053 00:18:49.241 Removing: /var/run/dpdk/spdk_pid75072 00:18:49.241 Removing: /var/run/dpdk/spdk_pid75108 00:18:49.241 Removing: /var/run/dpdk/spdk_pid75113 00:18:49.241 Removing: /var/run/dpdk/spdk_pid75474 00:18:49.241 Removing: /var/run/dpdk/spdk_pid75524 00:18:49.241 Removing: /var/run/dpdk/spdk_pid75867 00:18:49.241 Clean 00:18:55.806 13:53:05 nvmf_tcp -- common/autotest_common.sh@1451 -- # return 1 00:18:55.806 13:53:05 nvmf_tcp -- common/autotest_common.sh@1 -- # : 00:18:55.806 13:53:05 nvmf_tcp -- common/autotest_common.sh@1 -- # exit 1 00:18:56.384 [Pipeline] } 00:18:56.403 [Pipeline] // timeout 00:18:56.410 [Pipeline] } 00:18:56.427 [Pipeline] // stage 00:18:56.435 [Pipeline] } 00:18:56.438 ERROR: script returned exit code 1 00:18:56.438 Setting overall build result to FAILURE 00:18:56.454 [Pipeline] // catchError 00:18:56.465 [Pipeline] stage 00:18:56.467 [Pipeline] { (Stop VM) 00:18:56.481 [Pipeline] sh 00:18:56.757 + vagrant halt 00:19:00.943 ==> default: Halting domain... 00:19:07.564 [Pipeline] sh 00:19:07.858 + vagrant destroy -f 00:19:12.049 ==> default: Removing domain... 00:19:12.059 [Pipeline] sh 00:19:12.339 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:19:12.347 [Pipeline] } 00:19:12.362 [Pipeline] // stage 00:19:12.367 [Pipeline] } 00:19:12.380 [Pipeline] // dir 00:19:12.384 [Pipeline] } 00:19:12.397 [Pipeline] // wrap 00:19:12.402 [Pipeline] } 00:19:12.413 [Pipeline] // catchError 00:19:12.421 [Pipeline] stage 00:19:12.423 [Pipeline] { (Epilogue) 00:19:12.435 [Pipeline] sh 00:19:12.713 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:19:14.625 [Pipeline] catchError 00:19:14.627 [Pipeline] { 00:19:14.641 [Pipeline] sh 00:19:14.923 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:19:14.923 Artifacts sizes are good 00:19:14.933 [Pipeline] } 00:19:14.948 [Pipeline] // catchError 00:19:14.962 [Pipeline] archiveArtifacts 00:19:14.971 Archiving artifacts 00:19:15.272 [Pipeline] cleanWs 00:19:15.284 [WS-CLEANUP] Deleting project workspace... 00:19:15.284 [WS-CLEANUP] Deferred wipeout is used... 00:19:15.291 [WS-CLEANUP] done 00:19:15.293 [Pipeline] } 00:19:15.312 [Pipeline] // stage 00:19:15.320 [Pipeline] } 00:19:15.338 [Pipeline] // node 00:19:15.345 [Pipeline] End of Pipeline 00:19:15.407 Finished: FAILURE